Feb 17 13:36:19 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 13:36:19 crc restorecon[4675]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 13:36:19 crc restorecon[4675]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc 
restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc 
restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 
13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc 
restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc 
restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 13:36:19
crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:19 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc 
restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 
crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc 
restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc 
restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 13:36:20 crc restorecon[4675]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 13:36:21 crc kubenswrapper[4768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 13:36:21 crc kubenswrapper[4768]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 13:36:21 crc kubenswrapper[4768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 13:36:21 crc kubenswrapper[4768]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 13:36:21 crc kubenswrapper[4768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 13:36:21 crc kubenswrapper[4768]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.292757 4768 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300852 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300895 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300899 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300903 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300907 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300912 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300916 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300919 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300923 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300926 4768 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300930 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300935 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300941 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300945 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300950 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300955 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300959 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300963 4768 feature_gate.go:330] unrecognized feature gate: Example Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300967 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300971 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300975 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300980 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300984 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300987 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300991 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300995 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.300998 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301002 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301005 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301012 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301015 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301019 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301024 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301028 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301032 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301035 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301039 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301043 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301047 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301050 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301054 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301057 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301061 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301064 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301068 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301071 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301075 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301079 4768 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImages Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301083 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301087 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301090 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301094 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301112 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301116 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301119 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301123 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301126 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301130 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301133 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301137 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301140 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301144 4768 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301147 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301151 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301154 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301157 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301162 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301165 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301169 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301172 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.301176 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.302977 4768 flags.go:64] FLAG: --address="0.0.0.0" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303004 4768 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303022 4768 flags.go:64] FLAG: --anonymous-auth="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303031 4768 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303038 4768 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303043 4768 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 
17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303051 4768 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303058 4768 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303063 4768 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303069 4768 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303074 4768 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303081 4768 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303087 4768 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303093 4768 flags.go:64] FLAG: --cgroup-root="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303113 4768 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303120 4768 flags.go:64] FLAG: --client-ca-file="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303125 4768 flags.go:64] FLAG: --cloud-config="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303131 4768 flags.go:64] FLAG: --cloud-provider="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303136 4768 flags.go:64] FLAG: --cluster-dns="[]" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303143 4768 flags.go:64] FLAG: --cluster-domain="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303148 4768 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303153 4768 flags.go:64] FLAG: --config-dir="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303158 4768 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303165 4768 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303172 4768 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303178 4768 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303183 4768 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303189 4768 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303196 4768 flags.go:64] FLAG: --contention-profiling="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303201 4768 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303206 4768 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303212 4768 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303217 4768 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303225 4768 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303230 4768 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303262 4768 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303268 4768 flags.go:64] FLAG: --enable-load-reader="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303273 4768 flags.go:64] FLAG: --enable-server="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303279 4768 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 17 
13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303290 4768 flags.go:64] FLAG: --event-burst="100" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303295 4768 flags.go:64] FLAG: --event-qps="50" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303301 4768 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303306 4768 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303312 4768 flags.go:64] FLAG: --eviction-hard="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303319 4768 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303325 4768 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303330 4768 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303337 4768 flags.go:64] FLAG: --eviction-soft="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303342 4768 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303348 4768 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303354 4768 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303359 4768 flags.go:64] FLAG: --experimental-mounter-path="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303364 4768 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303370 4768 flags.go:64] FLAG: --fail-swap-on="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303376 4768 flags.go:64] FLAG: --feature-gates="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303382 4768 flags.go:64] FLAG: --file-check-frequency="20s" Feb 17 13:36:21 crc 
kubenswrapper[4768]: I0217 13:36:21.303388 4768 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303394 4768 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303400 4768 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303406 4768 flags.go:64] FLAG: --healthz-port="10248" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303411 4768 flags.go:64] FLAG: --help="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303417 4768 flags.go:64] FLAG: --hostname-override="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303422 4768 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303428 4768 flags.go:64] FLAG: --http-check-frequency="20s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303434 4768 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303439 4768 flags.go:64] FLAG: --image-credential-provider-config="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303445 4768 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303450 4768 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303455 4768 flags.go:64] FLAG: --image-service-endpoint="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303461 4768 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303466 4768 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303472 4768 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303478 4768 flags.go:64] FLAG: --kube-api-qps="50" Feb 
17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303483 4768 flags.go:64] FLAG: --kube-reserved="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303488 4768 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303494 4768 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303499 4768 flags.go:64] FLAG: --kubelet-cgroups="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303505 4768 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303510 4768 flags.go:64] FLAG: --lock-file="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303515 4768 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303521 4768 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303526 4768 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303536 4768 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303543 4768 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303549 4768 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303555 4768 flags.go:64] FLAG: --logging-format="text" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303561 4768 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303568 4768 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303574 4768 flags.go:64] FLAG: --manifest-url="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303579 4768 flags.go:64] FLAG: --manifest-url-header="" Feb 17 
13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303587 4768 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303593 4768 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303600 4768 flags.go:64] FLAG: --max-pods="110" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303605 4768 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303611 4768 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303617 4768 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303622 4768 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303628 4768 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303634 4768 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303639 4768 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303654 4768 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303660 4768 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303666 4768 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303671 4768 flags.go:64] FLAG: --pod-cidr="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303676 4768 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 17 13:36:21 crc kubenswrapper[4768]: 
I0217 13:36:21.303685 4768 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303690 4768 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303696 4768 flags.go:64] FLAG: --pods-per-core="0" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303702 4768 flags.go:64] FLAG: --port="10250" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303707 4768 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303713 4768 flags.go:64] FLAG: --provider-id="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303718 4768 flags.go:64] FLAG: --qos-reserved="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303724 4768 flags.go:64] FLAG: --read-only-port="10255" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303730 4768 flags.go:64] FLAG: --register-node="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303735 4768 flags.go:64] FLAG: --register-schedulable="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303741 4768 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303751 4768 flags.go:64] FLAG: --registry-burst="10" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303757 4768 flags.go:64] FLAG: --registry-qps="5" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303762 4768 flags.go:64] FLAG: --reserved-cpus="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303770 4768 flags.go:64] FLAG: --reserved-memory="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303777 4768 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303783 4768 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303788 4768 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 
13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303793 4768 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303799 4768 flags.go:64] FLAG: --runonce="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303804 4768 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303810 4768 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303816 4768 flags.go:64] FLAG: --seccomp-default="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303821 4768 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303826 4768 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303833 4768 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303838 4768 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303844 4768 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303850 4768 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303855 4768 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303860 4768 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303866 4768 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303872 4768 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303877 4768 flags.go:64] FLAG: --system-cgroups="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303883 4768 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303897 4768 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303903 4768 flags.go:64] FLAG: --tls-cert-file="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303909 4768 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303916 4768 flags.go:64] FLAG: --tls-min-version="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303921 4768 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303927 4768 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303932 4768 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303938 4768 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303944 4768 flags.go:64] FLAG: --v="2" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303956 4768 flags.go:64] FLAG: --version="false" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303963 4768 flags.go:64] FLAG: --vmodule="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303971 4768 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.303977 4768 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304124 4768 feature_gate.go:330] unrecognized feature gate: Example Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304133 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304139 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 13:36:21 crc 
kubenswrapper[4768]: W0217 13:36:21.304144 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304149 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304154 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304158 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304163 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304168 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304173 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304178 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304183 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304188 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304193 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304198 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304203 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304207 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304212 4768 feature_gate.go:330] 
unrecognized feature gate: NodeDisruptionPolicy Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304217 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304225 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304231 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304235 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304240 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304245 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304249 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304254 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304260 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304266 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304272 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304278 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304284 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304289 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304294 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304299 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304303 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304309 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304314 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304320 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304327 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304333 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304338 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304343 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304350 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304356 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304361 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304366 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304371 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304376 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304382 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304387 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304392 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304399 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304405 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 13:36:21 crc 
kubenswrapper[4768]: W0217 13:36:21.304410 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304415 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304420 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304424 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304429 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304434 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304439 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304448 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304454 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304460 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304466 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304471 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304476 4768 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304481 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304485 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304490 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304495 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.304500 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.304516 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.314903 4768 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.314955 4768 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315038 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315048 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315054 4768 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315061 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315067 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315072 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315077 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315081 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315088 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315113 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315119 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315125 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315131 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315138 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315143 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315147 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315152 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315157 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315163 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315169 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315174 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315179 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315183 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315188 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315193 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315197 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315202 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315206 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315211 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315244 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315249 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315253 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315257 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315263 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315277 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315284 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315289 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315294 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315300 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315305 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315309 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315314 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315318 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315323 4768 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315327 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315332 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315336 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315341 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315345 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315350 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315355 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315359 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315364 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315370 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315374 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315379 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315383 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315388 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315392 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315397 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315401 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315405 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315410 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315414 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315419 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315423 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315430 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315435 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315439 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315443 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315449 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.315457 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315611 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315619 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315623 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315628 4768 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315633 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315638 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315643 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315648 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315652 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315657 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315661 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315666 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315672 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315677 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315682 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315688 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315693 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315698 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315703 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315709 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315715 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315722 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315727 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315732 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315737 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315743 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315748 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315753 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315757 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315763 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315767 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315772 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315777 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315781 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315787 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315792 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315796 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315801 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315806 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315810 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315815 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315819 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315824 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315828 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315833 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315838 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315842 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315848 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315852 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315857 4768 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315862 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315866 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315870 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315875 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315880 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315884 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315889 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315893 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315899 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315904 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315909 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315914 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315918 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315923 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315928 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315932 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315937 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315943 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315949 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315954 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.315960 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.315969 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.316195 4768 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.321999 4768 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.322158 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
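The kubelet emits the same round of `feature_gate.go:330` warnings each time the gate set is re-applied, so the capture above repeats the same gate names several times. When triaging such output it can help to reduce it to the distinct gate names; a minimal sketch (the regex matches the klog message text visible in these lines; the helper name is my own, not a kubelet API):

```python
import re

# Matches the payload of lines like:
#   "... feature_gate.go:330] unrecognized feature gate: PinnedImages"
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def unrecognized_gates(log_text: str) -> set[str]:
    """Collect the distinct gate names flagged as unrecognized."""
    return set(GATE_RE.findall(log_text))

# Three warnings, two distinct gates (sample lines copied from the log above).
sample = (
    "W0217 13:36:21.304333 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages\n"
    "W0217 13:36:21.304338 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet\n"
    "W0217 13:36:21.315648 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages\n"
)
print(sorted(unrecognized_gates(sample)))
# ['PinnedImages', 'VSphereControlPlaneMachineSet']
```

Feeding it the full journal text (e.g. from `journalctl -u kubelet`) would yield the complete set of gates this kubelet build does not recognize.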
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.323448 4768 server.go:997] "Starting client certificate rotation"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.323493 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.323690 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-14 22:15:31.177838845 +0000 UTC
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.323787 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.353860 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.355939 4768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.360892 4768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.378166 4768 log.go:25] "Validated CRI v1 runtime API"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.410249 4768 log.go:25] "Validated CRI v1 image API"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.411931 4768 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.420354 4768 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-13-31-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.420401 4768 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.441763 4768 manager.go:217] Machine: {Timestamp:2026-02-17 13:36:21.439373878 +0000 UTC m=+0.718760330 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef BootID:72b2b2e1-552d-4984-900d-b4db18ea60be Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:39:bd:a5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:39:bd:a5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:90:b7:af Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:77:94:7c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:36:92:3d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d5:b7:c5 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:d7:47:97:bd:d3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:99:ec:20:0a:f6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.442037 4768 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.442261 4768 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.444050 4768 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.444232 4768 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.444267 4768 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.444455 4768 topology_manager.go:138] "Creating topology manager with none policy"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.444465 4768 container_manager_linux.go:303] "Creating device plugin manager"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.444942 4768 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.445021 4768 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.445223 4768 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.445659 4768 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.449251 4768 kubelet.go:418] "Attempting to sync node with API server"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.449278 4768 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.449304 4768 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.449380 4768 kubelet.go:324] "Adding apiserver pod source"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.449400 4768 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.452818 4768 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.453313 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.453378 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.453595 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.454704 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.454966 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.456175 4768 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457739 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457765 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457774 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457784 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457811 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457821 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457830 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457844 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457855 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457864 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457881 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.457890 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.461849 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.462817 4768 server.go:1280] "Started kubelet"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.462879 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 17 13:36:21 crc systemd[1]: Started Kubernetes Kubelet.
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.464420 4768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.464417 4768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.469771 4768 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.472835 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.472878 4768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.473479 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:21:52.114526406 +0000 UTC
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.473551 4768 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.473580 4768 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.473601 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.473655 4768 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.472418 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18950c24fe2b7722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:36:21.462775586 +0000 UTC m=+0.742162058,LastTimestamp:2026-02-17 13:36:21.462775586 +0000 UTC m=+0.742162058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.475017 4768 factory.go:55] Registering systemd factory
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.475059 4768 factory.go:221] Registration of the systemd container factory successfully
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.478421 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms"
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.478445 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.478574 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.478635 4768 server.go:460] "Adding debug handlers to kubelet server"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.478886 4768 factory.go:153] Registering CRI-O factory
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.478980 4768 factory.go:221] Registration of the crio container factory successfully
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.479186 4768 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.479263 4768 factory.go:103] Registering Raw factory
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.479316 4768 manager.go:1196] Started watching for new ooms in manager
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.480918 4768 manager.go:319] Starting recovery of all containers
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485135 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485331 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485401 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485467 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485521 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485574 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485628 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485687 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485747 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485802 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485855 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485920 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.485978 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486038 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486113 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486176 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486233 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486289 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486354 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486421 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486479 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486532 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486585 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486643 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486701 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486756 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486836 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.486972 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487047 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487119 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487188 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487244 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487296 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487348 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487407 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487463 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487519 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487575 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487628 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487681 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487738 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487790 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487842 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487904 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.487967 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488030 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488084 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488203 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488258 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488312 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488366 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488429 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488502 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488596 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488663 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488723 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.488808 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489223 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489291 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489351 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489405 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489474 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489545 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489608 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489669 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489746 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489818 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489911 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.489995 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490088 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490193 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490269 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490351 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490416 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490479 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490562 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.490641 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.493289 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.493936 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.493962 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.493982 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.493997 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494013 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f"
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494025 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494070 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494087 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494120 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494137 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494148 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" 
seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494164 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494179 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494195 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494209 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494222 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494233 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494247 4768 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494259 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494272 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494302 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494314 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494331 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494343 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494354 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494370 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494397 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494413 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494465 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494485 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494500 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494516 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494536 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494548 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494564 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494582 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494593 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494606 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494620 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494632 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494648 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494662 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 
13:36:21.494674 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494690 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494703 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494718 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494734 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494751 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494771 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494785 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494797 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494813 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494825 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494838 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494850 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494861 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494875 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494886 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494932 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494944 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494956 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 
17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494972 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494983 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.494998 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.495008 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.495019 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.495031 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.495042 4768 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.495052 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.495066 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497460 4768 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497534 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497559 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 
17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497581 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497596 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497610 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497629 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497646 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497664 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497679 4768 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497702 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497725 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497741 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497759 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497771 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497785 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497803 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497818 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497836 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497849 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497862 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497879 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497893 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497913 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497955 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.497974 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498029 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498051 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498076 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498091 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498124 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498142 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498156 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498172 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498197 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498210 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498228 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498250 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498265 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498283 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498298 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498315 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498336 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498357 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.498375 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499753 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499841 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499864 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499882 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499900 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499919 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499951 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499970 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.499989 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500008 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500027 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500045 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500063 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500086 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500122 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500139 4768 reconstruct.go:97] "Volume reconstruction finished"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.500152 4768 reconciler.go:26] "Reconciler: start to sync state"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.503647 4768 manager.go:324] Recovery completed
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.511616 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.512977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.513037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.513053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.514153 4768 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.514273 4768 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.514375 4768 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.531207 4768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.532812 4768 policy_none.go:49] "None policy: Start"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.532936 4768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.532995 4768 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.533027 4768 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.533081 4768 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 17 13:36:21 crc kubenswrapper[4768]: W0217 13:36:21.534980 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.535052 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.537594 4768 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.537622 4768 state_mem.go:35] "Initializing new in-memory state store"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.574677 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.595440 4768 manager.go:334] "Starting Device Plugin manager"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.595527 4768 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.595544 4768 server.go:79] "Starting device plugin registration server"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.596003 4768 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.596022 4768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.596237 4768 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.596330 4768 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.596340 4768 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.604742 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.635196 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.635326 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.636429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.636458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.636468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.636573 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.636822 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.636866 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637349 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637461 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637564 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637588 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.637750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638146 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638235 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.638991 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639376 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639400 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639410 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.639996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.640018 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.640034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.640054 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.640090 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.640812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.640827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.640835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.679512 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.696911 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.698890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.699219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.699384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.699499 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.700394 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.701870 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702430 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702495 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702613 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702692 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.702962 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.703028 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.703130 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.703188 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.703231 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.703274 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804790 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804837 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804874 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804903 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804922 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804940 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804961 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.804984 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805001 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805019 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805030 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805045 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805091 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805149 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805154 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805070 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805177 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805212 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805180 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805188 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805276 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805351 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.805427 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.900569 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.901706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.901743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.901758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.901783 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:36:21 crc kubenswrapper[4768]: E0217 13:36:21.902293 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" 
node="crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.970378 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:21 crc kubenswrapper[4768]: I0217 13:36:21.994077 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.002971 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 13:36:22 crc kubenswrapper[4768]: W0217 13:36:22.018735 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-37b11892daacd0a3db38d8515c2f287c2ac00b6a8d9007b0149da0bca470e66e WatchSource:0}: Error finding container 37b11892daacd0a3db38d8515c2f287c2ac00b6a8d9007b0149da0bca470e66e: Status 404 returned error can't find the container with id 37b11892daacd0a3db38d8515c2f287c2ac00b6a8d9007b0149da0bca470e66e Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.019265 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 13:36:22 crc kubenswrapper[4768]: W0217 13:36:22.021489 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2f117d36fbdedac7db37dde87cd377c5efbe18d6602737caee062e7b3f3e4a6a WatchSource:0}: Error finding container 2f117d36fbdedac7db37dde87cd377c5efbe18d6602737caee062e7b3f3e4a6a: Status 404 returned error can't find the container with id 2f117d36fbdedac7db37dde87cd377c5efbe18d6602737caee062e7b3f3e4a6a Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.022744 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:22 crc kubenswrapper[4768]: W0217 13:36:22.023953 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-93d575d0da1470df38e98c794e5e3a24387d46a66352c2b50f5774cdfedeb747 WatchSource:0}: Error finding container 93d575d0da1470df38e98c794e5e3a24387d46a66352c2b50f5774cdfedeb747: Status 404 returned error can't find the container with id 93d575d0da1470df38e98c794e5e3a24387d46a66352c2b50f5774cdfedeb747 Feb 17 13:36:22 crc kubenswrapper[4768]: W0217 13:36:22.029999 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a819bb001bd90aced6df67e8c2ef259e4fdc7db77b054e6c105dff3d4fefff76 WatchSource:0}: Error finding container a819bb001bd90aced6df67e8c2ef259e4fdc7db77b054e6c105dff3d4fefff76: Status 404 returned error can't find the container with id a819bb001bd90aced6df67e8c2ef259e4fdc7db77b054e6c105dff3d4fefff76 Feb 17 13:36:22 crc kubenswrapper[4768]: W0217 13:36:22.042443 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-454af107d965a0884578bb761c798c848ac1d1449bd222f825e1346fb863c4c1 WatchSource:0}: Error finding container 454af107d965a0884578bb761c798c848ac1d1449bd222f825e1346fb863c4c1: Status 404 returned error can't find the container with id 454af107d965a0884578bb761c798c848ac1d1449bd222f825e1346fb863c4c1 Feb 17 13:36:22 crc kubenswrapper[4768]: E0217 13:36:22.080322 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" 
interval="800ms" Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.303149 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.305076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.305138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.305147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.305173 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:36:22 crc kubenswrapper[4768]: E0217 13:36:22.305732 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.463791 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.473956 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:23:22.079547304 +0000 UTC Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.540200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"37b11892daacd0a3db38d8515c2f287c2ac00b6a8d9007b0149da0bca470e66e"} Feb 17 13:36:22 
crc kubenswrapper[4768]: I0217 13:36:22.541829 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"454af107d965a0884578bb761c798c848ac1d1449bd222f825e1346fb863c4c1"} Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.543629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a819bb001bd90aced6df67e8c2ef259e4fdc7db77b054e6c105dff3d4fefff76"} Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.544715 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"93d575d0da1470df38e98c794e5e3a24387d46a66352c2b50f5774cdfedeb747"} Feb 17 13:36:22 crc kubenswrapper[4768]: I0217 13:36:22.545502 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2f117d36fbdedac7db37dde87cd377c5efbe18d6602737caee062e7b3f3e4a6a"} Feb 17 13:36:22 crc kubenswrapper[4768]: W0217 13:36:22.669982 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:22 crc kubenswrapper[4768]: E0217 13:36:22.670058 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:22 crc 
kubenswrapper[4768]: W0217 13:36:22.781615 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:22 crc kubenswrapper[4768]: E0217 13:36:22.781721 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:22 crc kubenswrapper[4768]: E0217 13:36:22.881340 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Feb 17 13:36:22 crc kubenswrapper[4768]: W0217 13:36:22.925443 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:22 crc kubenswrapper[4768]: E0217 13:36:22.925543 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:23 crc kubenswrapper[4768]: W0217 13:36:23.013009 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:23 crc kubenswrapper[4768]: E0217 13:36:23.013123 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.106477 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.108175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.108213 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.108222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.108245 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:36:23 crc kubenswrapper[4768]: E0217 13:36:23.108740 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.455546 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 13:36:23 crc kubenswrapper[4768]: E0217 13:36:23.456581 4768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while 
requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.464445 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.474478 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:10:25.977499989 +0000 UTC Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.553927 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="87f97eca8d376c69de8b0d6da939ba490478da25fa382f152052590d2ca927f2" exitCode=0 Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.554036 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"87f97eca8d376c69de8b0d6da939ba490478da25fa382f152052590d2ca927f2"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.554087 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.555156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.555186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.555195 4768 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.556451 4768 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e3517c374f232f5fc5707d00d0596ee543879b70ba6f3e35f0c2819cebaa41d0" exitCode=0 Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.556494 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e3517c374f232f5fc5707d00d0596ee543879b70ba6f3e35f0c2819cebaa41d0"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.556564 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.557632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.557689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.557714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.559039 4768 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64" exitCode=0 Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.559074 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.559170 4768 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.559791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.559820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.559832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.565633 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.565674 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.565684 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.565723 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.565738 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.566671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.566699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.566707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.568457 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9" exitCode=0 Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.568484 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9"} Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.568561 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.569257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.569280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.569288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.574023 4768 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.575289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.575328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:23 crc kubenswrapper[4768]: I0217 13:36:23.575345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.464441 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.475014 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:26:08.768569969 +0000 UTC Feb 17 13:36:24 crc kubenswrapper[4768]: E0217 13:36:24.484028 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.572289 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2107246f378febfafc99b78ae01b5006c65aab0c6802bc6f63eabe7c4f6afa55" exitCode=0 Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.572334 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2107246f378febfafc99b78ae01b5006c65aab0c6802bc6f63eabe7c4f6afa55"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.572431 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.573373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.573406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.573418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.576240 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"69f54a203eacaea308baea34fc388357ae2762ef503475c8438f326ed643b401"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.576266 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.576933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.576960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.576971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.578384 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.578427 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.578441 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.578401 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.579178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.579216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.579228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.585940 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586039 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586367 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586408 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586426 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340"} Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.586814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:24 crc 
kubenswrapper[4768]: I0217 13:36:24.587336 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.587356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.587366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.683074 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.709671 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.710622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.710655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.710666 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:24 crc kubenswrapper[4768]: I0217 13:36:24.710690 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:36:24 crc kubenswrapper[4768]: E0217 13:36:24.711140 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Feb 17 13:36:24 crc kubenswrapper[4768]: W0217 13:36:24.756398 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:24 crc kubenswrapper[4768]: E0217 13:36:24.756457 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:25 crc kubenswrapper[4768]: W0217 13:36:25.020004 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:25 crc kubenswrapper[4768]: E0217 13:36:25.020078 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.128559 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:25 crc kubenswrapper[4768]: W0217 13:36:25.415557 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Feb 17 13:36:25 crc kubenswrapper[4768]: E0217 13:36:25.415626 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.475149 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:46:33.430266683 +0000 UTC Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.590727 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.593250 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f" exitCode=255 Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.593499 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.593634 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f"} Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.594622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.594670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.594685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:25 crc 
kubenswrapper[4768]: I0217 13:36:25.595424 4768 scope.go:117] "RemoveContainer" containerID="56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.599168 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="309faa0d43bd63336ec6832d9a08dc991060132f351f4502bc645c5cdb44eba4" exitCode=0 Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.599243 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"309faa0d43bd63336ec6832d9a08dc991060132f351f4502bc645c5cdb44eba4"} Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.599283 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.599392 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.599407 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.599428 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.600255 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.600949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.600985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.600999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.601953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.601979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.601991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.602671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.602697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.602709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.603122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.603145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:25 crc kubenswrapper[4768]: I0217 13:36:25.603156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.475648 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:19:43.699512712 +0000 UTC Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.604554 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.607486 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f"} Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.607514 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.607562 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.608563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.608610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.608622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.614676 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"137c122cfd02da7fcb06a9eae59629ccf02205c5b254e09e950e775b2c5f93a7"} Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.614739 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.614742 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"08891b7721554a46919aaae9fae8ee68b9b2c8a5819848b1aad0c8d24d3fd341"} Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.614767 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f351d4e9063867602e0aa41fd95bcece8a18260056a970d51ba2bbd7090811e"} Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.614787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3095083de5a9a5662c021f3cdf454a4625041c7a45b4d34ecc5ce21f54e9ffec"} Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.614814 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f619f1c26af3481394839405f326f3f372bfbf1d55cc078eea563459a0b7cb33"} Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.614911 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.615979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.616034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.616054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.616973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.617038 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:26 crc kubenswrapper[4768]: I0217 13:36:26.617062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.476686 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:26:42.660686079 +0000 UTC Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.617081 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.617171 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.617245 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.618161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.618216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.618227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.618440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.618497 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.618518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 
13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.685286 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.825394 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.825584 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.826721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.826768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.826781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.834355 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.911641 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.913070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.913099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.913124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:27 crc kubenswrapper[4768]: I0217 13:36:27.913145 4768 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.129415 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.129503 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.315806 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.413210 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.477231 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:54:52.277544724 +0000 UTC Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.618764 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.618821 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.619257 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 
13:36:28.619304 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.619788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.619825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.619837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.620617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.620637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.620659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.620664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.620671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:28 crc kubenswrapper[4768]: I0217 13:36:28.620679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.469091 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.477372 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration 
is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:32:23.40441518 +0000 UTC Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.540291 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.620563 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.620589 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.620599 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.621687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.621786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.621864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.621742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.621992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:29 crc kubenswrapper[4768]: I0217 13:36:29.622008 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:30 crc kubenswrapper[4768]: I0217 13:36:30.264273 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:36:30 crc kubenswrapper[4768]: I0217 
13:36:30.264800 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:30 crc kubenswrapper[4768]: I0217 13:36:30.266305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:30 crc kubenswrapper[4768]: I0217 13:36:30.266517 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:30 crc kubenswrapper[4768]: I0217 13:36:30.266656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:30 crc kubenswrapper[4768]: I0217 13:36:30.477949 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:00:45.947009386 +0000 UTC Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.247985 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.248632 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.250189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.250277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.250291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.479148 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:40:38.913084863 
+0000 UTC Feb 17 13:36:31 crc kubenswrapper[4768]: E0217 13:36:31.604967 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.900041 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.900284 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.901309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.901347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:31 crc kubenswrapper[4768]: I0217 13:36:31.901359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:32 crc kubenswrapper[4768]: I0217 13:36:32.479276 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:02:59.124524027 +0000 UTC Feb 17 13:36:33 crc kubenswrapper[4768]: I0217 13:36:33.480175 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:43:25.224547995 +0000 UTC Feb 17 13:36:34 crc kubenswrapper[4768]: I0217 13:36:34.481021 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:39:04.85964909 +0000 UTC Feb 17 13:36:34 crc kubenswrapper[4768]: I0217 13:36:34.687548 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:34 crc kubenswrapper[4768]: I0217 13:36:34.687773 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:34 crc kubenswrapper[4768]: I0217 13:36:34.689358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:34 crc kubenswrapper[4768]: I0217 13:36:34.689423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:34 crc kubenswrapper[4768]: I0217 13:36:34.689447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:35 crc kubenswrapper[4768]: I0217 13:36:35.465056 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 17 13:36:35 crc kubenswrapper[4768]: I0217 13:36:35.481427 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:58:39.367647001 +0000 UTC Feb 17 13:36:35 crc kubenswrapper[4768]: W0217 13:36:35.716177 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 17 13:36:35 crc kubenswrapper[4768]: I0217 13:36:35.716329 4768 trace.go:236] Trace[1001802414]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 13:36:25.714) (total time: 10001ms): Feb 17 13:36:35 crc kubenswrapper[4768]: Trace[1001802414]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS 
handshake timeout 10001ms (13:36:35.716) Feb 17 13:36:35 crc kubenswrapper[4768]: Trace[1001802414]: [10.001590428s] [10.001590428s] END Feb 17 13:36:35 crc kubenswrapper[4768]: E0217 13:36:35.716365 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 17 13:36:35 crc kubenswrapper[4768]: I0217 13:36:35.843164 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 13:36:35 crc kubenswrapper[4768]: I0217 13:36:35.843375 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 13:36:35 crc kubenswrapper[4768]: I0217 13:36:35.847850 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 13:36:35 crc kubenswrapper[4768]: I0217 13:36:35.848176 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 13:36:36 crc kubenswrapper[4768]: I0217 13:36:36.482438 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:29:09.661377089 +0000 UTC Feb 17 13:36:37 crc kubenswrapper[4768]: I0217 13:36:37.484197 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:27:13.882775077 +0000 UTC Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.129708 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.129808 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.322825 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.323244 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.323623 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe 
status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.323713 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.324267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.324459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.324529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.327379 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.446015 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.446251 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.449457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.449525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.449543 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.462235 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.485067 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:33:55.293747688 +0000 UTC Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.644266 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.644455 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.645427 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.645480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.645508 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.645524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.645501 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.647177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.647283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:38 crc kubenswrapper[4768]: I0217 13:36:38.647315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:39 crc kubenswrapper[4768]: I0217 13:36:39.485470 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:37:38.687356222 +0000 UTC Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.486042 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:57:13.536025468 +0000 UTC Feb 17 13:36:40 crc kubenswrapper[4768]: E0217 13:36:40.842081 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.845630 4768 trace.go:236] Trace[1598860576]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 13:36:30.823) (total time: 10022ms): Feb 17 13:36:40 crc kubenswrapper[4768]: Trace[1598860576]: ---"Objects listed" error: 10022ms (13:36:40.845) Feb 17 13:36:40 crc kubenswrapper[4768]: Trace[1598860576]: [10.022136461s] [10.022136461s] END Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.845662 4768 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Feb 17 13:36:40 crc kubenswrapper[4768]: E0217 13:36:40.846722 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.846732 4768 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.847245 4768 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.848414 4768 trace.go:236] Trace[459352279]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 13:36:29.780) (total time: 11067ms): Feb 17 13:36:40 crc kubenswrapper[4768]: Trace[459352279]: ---"Objects listed" error: 11067ms (13:36:40.848) Feb 17 13:36:40 crc kubenswrapper[4768]: Trace[459352279]: [11.067563293s] [11.067563293s] END Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.848431 4768 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.850702 4768 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.870082 4768 csr.go:261] certificate signing request csr-hmmc5 is approved, waiting to be issued Feb 17 13:36:40 crc kubenswrapper[4768]: I0217 13:36:40.885125 4768 csr.go:257] certificate signing request csr-hmmc5 is issued Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.323692 4768 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 13:36:41 crc kubenswrapper[4768]: W0217 13:36:41.323873 4768 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended 
with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 13:36:41 crc kubenswrapper[4768]: W0217 13:36:41.323948 4768 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 13:36:41 crc kubenswrapper[4768]: W0217 13:36:41.324047 4768 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.461066 4768 apiserver.go:52] "Watching apiserver" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.486336 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:14:00.201128446 +0000 UTC Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.495148 4768 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.495459 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6l7rv","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.495815 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.496071 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.496071 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.496087 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.496414 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.496429 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.496143 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.496492 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.496508 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.496555 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.497412 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.497889 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.498070 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.498620 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.499450 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.499713 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.499719 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.499731 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.499755 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.503895 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.504162 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.504163 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.513020 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.522597 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.531639 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.542421 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.550852 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.560269 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.566372 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.573477 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.575015 4768 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.582550 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.590895 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.600402 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.609182 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.620057 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.632784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.640374 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650268 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650305 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650337 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650368 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650401 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650436 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650469 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650502 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650534 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650545 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650561 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650619 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650641 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650659 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650678 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650696 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650742 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650764 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650807 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650850 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 
13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650870 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650890 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650908 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650939 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650959 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650977 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.650993 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651012 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651046 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651079 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651096 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651094 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651133 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651156 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651185 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651234 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651250 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651264 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651309 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651324 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651339 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651355 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651372 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651389 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651404 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651419 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651435 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651457 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651564 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.651963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652224 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652309 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652326 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652336 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652400 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652425 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652436 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652671 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652711 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652739 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652762 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652788 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652840 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652865 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652886 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652913 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652934 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653259 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653291 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653340 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653362 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653411 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653433 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653456 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654414 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654452 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654496 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654517 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654571 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654617 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655306 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 13:36:41 crc 
kubenswrapper[4768]: I0217 13:36:41.655369 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655400 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655450 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655485 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655550 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655578 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655627 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652452 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655791 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652529 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.652961 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653160 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653621 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653648 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655851 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653708 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653830 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653838 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653896 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.653978 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654017 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654150 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654199 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654294 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654362 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654601 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654624 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654837 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654930 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.654966 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655222 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655391 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655611 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655632 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655746 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655772 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655929 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655956 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.655794 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656577 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656608 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656632 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656654 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656677 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656698 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656651 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656760 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656818 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.657049 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.657161 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.657289 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656570 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.658661 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.659816 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.660073 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.660311 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.660358 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.660396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.660938 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661088 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.656721 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661419 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661443 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " 
Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661461 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661477 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661492 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661510 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661525 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661541 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661558 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661573 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661601 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661611 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661619 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661659 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661690 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661742 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661762 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661811 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.662698 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663011 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663152 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663184 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663272 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663534 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663600 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663628 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663781 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663848 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.663905 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664158 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664237 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664311 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664324 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664541 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664802 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665062 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.664983 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.661806 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665419 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665458 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665512 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665535 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665615 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f" exitCode=255 Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665662 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" 
(OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f"} Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665780 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.665788 4768 scope.go:117] "RemoveContainer" containerID="56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666016 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666408 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666252 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666441 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666465 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666495 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666521 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666544 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666728 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666858 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.667062 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.666916 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.667218 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.667433 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.667458 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.667894 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.667936 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668190 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668208 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668250 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668678 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668759 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668804 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.668852 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:36:42.16882559 +0000 UTC m=+21.448212032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668894 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668933 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.668972 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669005 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669032 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669148 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669235 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669289 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669320 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669547 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: 
"kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669693 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669700 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669723 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669747 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669769 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669820 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669885 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669950 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.669987 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670047 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670129 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670167 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670213 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670246 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670290 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670311 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670333 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670404 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670457 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670491 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670514 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670516 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670538 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670559 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670559 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670600 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670623 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670638 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670657 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670698 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670714 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670736 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670809 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670888 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670918 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670942 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.670975 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671044 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671065 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671179 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671236 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671273 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671275 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671324 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671357 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671413 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671441 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671513 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671592 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671622 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671677 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671712 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671771 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671793 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671840 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671898 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 13:36:41 crc kubenswrapper[4768]: 
I0217 13:36:41.671929 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671981 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.672006 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.672032 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.672087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.672154 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.672209 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.673837 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.673890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.673920 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.673952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.673986 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.674016 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.674045 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.674072 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:41 crc 
kubenswrapper[4768]: I0217 13:36:41.674265 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.674364 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.674502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.675676 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.675767 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dac64b2f-4b0b-454c-96e0-fc7d563d300f-hosts-file\") pod \"node-resolver-6l7rv\" (UID: \"dac64b2f-4b0b-454c-96e0-fc7d563d300f\") " 
pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.676337 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.676610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.676742 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.676790 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njgg\" (UniqueName: \"kubernetes.io/projected/dac64b2f-4b0b-454c-96e0-fc7d563d300f-kube-api-access-7njgg\") pod \"node-resolver-6l7rv\" (UID: \"dac64b2f-4b0b-454c-96e0-fc7d563d300f\") " pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671354 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: 
"6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671580 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.671758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.672856 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.673142 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.673338 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.676364 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.676551 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.678531 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.678799 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.679145 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.679223 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.676684 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.677085 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.677162 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.677460 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.677546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.678005 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.679348 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:42.179280991 +0000 UTC m=+21.458667473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.679204 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680449 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680473 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.679337 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680092 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680272 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680620 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680661 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680707 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680729 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680646 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680743 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680881 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.681921 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.680359 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.683338 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.683434 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.683604 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.683671 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.683765 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.684241 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.684752 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.684763 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.684895 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.684955 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685199 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685422 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685490 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685527 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685562 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685596 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685629 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685635 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685659 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685699 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.685803 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685885 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.685896 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.685911 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:42.185885771 +0000 UTC m=+21.465272223 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.686892 4768 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.687207 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.687900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688427 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688647 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688707 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688730 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688748 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688762 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688777 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688793 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688812 4768 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688833 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688846 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688860 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688877 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688892 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688907 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688921 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688934 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688950 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688965 4768 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688978 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.688994 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689007 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689022 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689036 4768 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689050 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689064 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689077 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689091 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689131 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.689145 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc 
kubenswrapper[4768]: I0217 13:36:41.691045 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691070 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691085 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691125 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691143 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691155 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691168 4768 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691165 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691181 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691207 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691228 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691291 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691314 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691438 4768 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691460 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691476 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691493 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691507 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691521 4768 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691534 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691548 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691561 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691573 4768 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691586 4768 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691624 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691637 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691687 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691701 4768 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath 
\"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691714 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691730 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691744 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691757 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691770 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691783 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691796 4768 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691808 4768 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691822 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691835 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691852 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691866 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691879 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691892 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691906 4768 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691920 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691934 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691949 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691965 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.691978 4768 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692022 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692075 4768 reconciler_common.go:293] "Volume 
detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692088 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692124 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692138 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692150 4768 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692162 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692173 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692187 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692199 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692212 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692226 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692238 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692378 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.692475 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.693310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.693886 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.701411 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.701515 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.701540 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.701630 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:42.201601424 +0000 UTC m=+21.480987886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.703692 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.704289 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.704809 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.704886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.705219 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.706079 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.706135 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.706780 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.706874 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.707091 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.707759 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.707864 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.707988 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.708179 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:42.20809174 +0000 UTC m=+21.487478182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.707760 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.709643 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.710210 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.710362 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.710406 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.710937 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.712078 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.713694 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.715905 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.716219 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.716523 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.716737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.716984 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.716933 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.717239 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.717349 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.717536 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.717522 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.717880 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.718597 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.718754 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.718858 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.731358 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.732169 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.732506 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.732581 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.732759 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.732797 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.732866 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.732870 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.733688 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.733782 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.734433 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.734889 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.734948 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.735194 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.735683 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.735942 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.744530 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.744965 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.753074 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.754995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.755811 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.756869 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.766407 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.775122 4768 scope.go:117] "RemoveContainer" containerID="9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.775299 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:36:41 crc kubenswrapper[4768]: E0217 13:36:41.775428 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.793953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dac64b2f-4b0b-454c-96e0-fc7d563d300f-hosts-file\") pod \"node-resolver-6l7rv\" (UID: \"dac64b2f-4b0b-454c-96e0-fc7d563d300f\") " 
pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794039 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794069 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njgg\" (UniqueName: \"kubernetes.io/projected/dac64b2f-4b0b-454c-96e0-fc7d563d300f-kube-api-access-7njgg\") pod \"node-resolver-6l7rv\" (UID: \"dac64b2f-4b0b-454c-96e0-fc7d563d300f\") " pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794137 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794148 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794159 4768 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794169 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794177 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794185 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794199 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794209 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794216 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794224 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794234 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794242 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794251 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794259 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794267 4768 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794276 4768 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794284 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794292 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794301 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794217 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794167 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794663 4768 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794715 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794734 4768 
reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794755 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794768 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794777 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794786 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794795 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794805 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794788 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/dac64b2f-4b0b-454c-96e0-fc7d563d300f-hosts-file\") pod \"node-resolver-6l7rv\" (UID: \"dac64b2f-4b0b-454c-96e0-fc7d563d300f\") " pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794813 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794936 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794951 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794964 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794978 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.794991 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795003 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795015 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795027 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795039 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795051 4768 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795067 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795079 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795091 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on 
node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795154 4768 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795167 4768 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795181 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795192 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795204 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795215 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795227 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795239 4768 reconciler_common.go:293] "Volume detached for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795252 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795263 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795275 4768 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795287 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795298 4768 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795309 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795321 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795333 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795346 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795358 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795372 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795410 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795426 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795440 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795483 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795498 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795510 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795523 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795536 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795550 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795562 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795574 4768 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795586 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795601 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795616 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795628 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795640 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795652 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795665 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795676 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795688 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795700 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795712 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795726 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795738 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795751 4768 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795765 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.795777 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.808849 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njgg\" (UniqueName: \"kubernetes.io/projected/dac64b2f-4b0b-454c-96e0-fc7d563d300f-kube-api-access-7njgg\") pod \"node-resolver-6l7rv\" (UID: \"dac64b2f-4b0b-454c-96e0-fc7d563d300f\") " pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.808952 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.818737 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 13:36:41 crc kubenswrapper[4768]: W0217 13:36:41.822033 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-70f045ffbdab8eb2711c0c5150aec23350851f7dfa5013df94abaaad607ef8c8 WatchSource:0}: Error finding container 70f045ffbdab8eb2711c0c5150aec23350851f7dfa5013df94abaaad607ef8c8: Status 404 returned error can't find the container with id 70f045ffbdab8eb2711c0c5150aec23350851f7dfa5013df94abaaad607ef8c8 Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.825407 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.831411 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6l7rv" Feb 17 13:36:41 crc kubenswrapper[4768]: W0217 13:36:41.831800 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6dcae2465dc490b25166ad9989dc2f5c917fa986fb65599b0218bf846a4fc0a5 WatchSource:0}: Error finding container 6dcae2465dc490b25166ad9989dc2f5c917fa986fb65599b0218bf846a4fc0a5: Status 404 returned error can't find the container with id 6dcae2465dc490b25166ad9989dc2f5c917fa986fb65599b0218bf846a4fc0a5 Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.886368 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 13:31:40 +0000 UTC, rotation deadline is 2026-12-08 07:29:17.413342615 +0000 UTC Feb 17 13:36:41 crc kubenswrapper[4768]: I0217 13:36:41.886791 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7049h52m35.526555779s for next certificate 
rotation Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.064465 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p97z4"] Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.064812 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jjjqj"] Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.065011 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.065198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.069484 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.069629 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.069907 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.069911 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.069922 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.069999 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.070018 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.070035 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.070347 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.070483 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.081767 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.083907 4768 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.100481 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2h62\" (UniqueName: \"kubernetes.io/projected/10c685ba-8fe0-425c-958c-3fb6754d3d84-kube-api-access-b2h62\") pod 
\"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.100561 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-cni-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.100618 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-cnibin\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.100644 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-os-release\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.100693 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10c685ba-8fe0-425c-958c-3fb6754d3d84-mcd-auth-proxy-config\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.100719 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-multus-certs\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101385 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-cni-binary-copy\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101449 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-kubelet\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101474 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-hostroot\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101542 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-k8s-cni-cncf-io\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101607 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-cni-multus\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101632 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-daemon-config\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-etc-kubernetes\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101730 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkgb\" (UniqueName: \"kubernetes.io/projected/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-kube-api-access-jlkgb\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-system-cni-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.101959 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-socket-dir-parent\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.102344 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10c685ba-8fe0-425c-958c-3fb6754d3d84-proxy-tls\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.102430 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/10c685ba-8fe0-425c-958c-3fb6754d3d84-rootfs\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.102457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-netns\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.102486 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-cni-bin\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.102509 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-conf-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.111187 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.131072 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.141241 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:25Z\\\",\\\"message\\\":\\\"W0217 13:36:24.674264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:36:24.676596 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335384 cert, and key in /tmp/serving-cert-2323124499/serving-signer.crt, /tmp/serving-cert-2323124499/serving-signer.key\\\\nI0217 13:36:25.041663 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:36:25.046527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:36:25.046685 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:25.048335 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2323124499/tls.crt::/tmp/serving-cert-2323124499/tls.key\\\\\\\"\\\\nF0217 13:36:25.371205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.153313 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.162419 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.171435 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.185465 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.202294 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.203508 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.203694 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-17 13:36:43.20366597 +0000 UTC m=+22.483052412 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.203811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-cni-bin\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.203902 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-conf-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204021 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/10c685ba-8fe0-425c-958c-3fb6754d3d84-rootfs\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204127 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-netns\") pod \"multus-jjjqj\" (UID: 
\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2h62\" (UniqueName: \"kubernetes.io/projected/10c685ba-8fe0-425c-958c-3fb6754d3d84-kube-api-access-b2h62\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204324 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-cni-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204407 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-cnibin\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204492 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-os-release\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-os-release\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204155 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/10c685ba-8fe0-425c-958c-3fb6754d3d84-rootfs\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.203950 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-conf-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204519 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-cnibin\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204184 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-netns\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.203922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-cni-bin\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-cni-dir\") pod 
\"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10c685ba-8fe0-425c-958c-3fb6754d3d84-mcd-auth-proxy-config\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.204963 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-multus-certs\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205052 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205139 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-multus-certs\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.205207 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:42 
crc kubenswrapper[4768]: E0217 13:36:42.205272 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:43.205259376 +0000 UTC m=+22.484645878 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205377 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-cni-binary-copy\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10c685ba-8fe0-425c-958c-3fb6754d3d84-mcd-auth-proxy-config\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205474 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-kubelet\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205591 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-hostroot\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205620 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-k8s-cni-cncf-io\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205645 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-cni-multus\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205668 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-daemon-config\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-run-k8s-cni-cncf-io\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205723 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-cni-multus\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-hostroot\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205776 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-etc-kubernetes\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205832 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-etc-kubernetes\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205861 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlkgb\" (UniqueName: \"kubernetes.io/projected/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-kube-api-access-jlkgb\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205944 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.205969 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-system-cni-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.206044 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.206116 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:43.20607855 +0000 UTC m=+22.485465032 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206171 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-system-cni-dir\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-socket-dir-parent\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206203 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10c685ba-8fe0-425c-958c-3fb6754d3d84-proxy-tls\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206256 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-socket-dir-parent\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206288 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206275 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-cni-binary-copy\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.206377 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.206392 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.206404 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.206447 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:43.20643407 +0000 UTC m=+22.485820512 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206483 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-multus-daemon-config\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.206663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-host-var-lib-kubelet\") pod \"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.209356 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10c685ba-8fe0-425c-958c-3fb6754d3d84-proxy-tls\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.216527 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.226993 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2h62\" (UniqueName: \"kubernetes.io/projected/10c685ba-8fe0-425c-958c-3fb6754d3d84-kube-api-access-b2h62\") pod \"machine-config-daemon-p97z4\" (UID: \"10c685ba-8fe0-425c-958c-3fb6754d3d84\") " pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.227692 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlkgb\" (UniqueName: \"kubernetes.io/projected/e044bf1f-26b2-4a39-86e6-0440eff3eaa9-kube-api-access-jlkgb\") pod 
\"multus-jjjqj\" (UID: \"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\") " pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.227911 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.237288 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.244749 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.254621 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:25Z\\\",\\\"message\\\":\\\"W0217 13:36:24.674264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:36:24.676596 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335384 cert, and key in /tmp/serving-cert-2323124499/serving-signer.crt, /tmp/serving-cert-2323124499/serving-signer.key\\\\nI0217 13:36:25.041663 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:36:25.046527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:36:25.046685 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:25.048335 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2323124499/tls.crt::/tmp/serving-cert-2323124499/tls.key\\\\\\\"\\\\nF0217 13:36:25.371205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.263287 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.272550 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.282249 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.299368 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.306676 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.306898 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.307097 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.307222 4768 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.307362 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:43.307342195 +0000 UTC m=+22.586728637 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.330643 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.352029 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.385644 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.385690 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jjjqj" Feb 17 13:36:42 crc kubenswrapper[4768]: W0217 13:36:42.404146 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10c685ba_8fe0_425c_958c_3fb6754d3d84.slice/crio-34f0f125cccda39844ccffcd96f6c46f3f28c663ab132f5c61ef36289f9272aa WatchSource:0}: Error finding container 34f0f125cccda39844ccffcd96f6c46f3f28c663ab132f5c61ef36289f9272aa: Status 404 returned error can't find the container with id 34f0f125cccda39844ccffcd96f6c46f3f28c663ab132f5c61ef36289f9272aa Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.454165 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6xvnz"] Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.454766 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5cplg"] Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.455002 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.455484 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.460122 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.460326 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.460344 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.460472 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.460516 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.461610 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.463138 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.463159 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.463544 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.472778 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:25Z\\\",\\\"message\\\":\\\"W0217 13:36:24.674264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:36:24.676596 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335384 cert, and key in /tmp/serving-cert-2323124499/serving-signer.crt, /tmp/serving-cert-2323124499/serving-signer.key\\\\nI0217 13:36:25.041663 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:36:25.046527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:36:25.046685 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:25.048335 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2323124499/tls.crt::/tmp/serving-cert-2323124499/tls.key\\\\\\\"\\\\nF0217 13:36:25.371205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90
d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.486751 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:35:45.895044974 +0000 UTC Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.493932 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.505984 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512077 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-systemd\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512140 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-ovn\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-cnibin\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512187 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-env-overrides\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512203 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-log-socket\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512222 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-slash\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-node-log\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512255 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-netd\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512271 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-os-release\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42dxs\" (UniqueName: \"kubernetes.io/projected/68ae92b3-aced-409b-901b-252d2364cc01-kube-api-access-42dxs\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512315 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-kubelet\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512331 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512355 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512369 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-systemd-units\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512386 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-netns\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512402 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-script-lib\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: 
I0217 13:36:42.512417 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg6ql\" (UniqueName: \"kubernetes.io/projected/742e6df8-2a68-426e-982c-ef825c6efca3-kube-api-access-tg6ql\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512433 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-ovn-kubernetes\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512463 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-config\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512485 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-system-cni-dir\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512501 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/68ae92b3-aced-409b-901b-252d2364cc01-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: 
\"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-bin\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512528 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/742e6df8-2a68-426e-982c-ef825c6efca3-ovn-node-metrics-cert\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512544 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68ae92b3-aced-409b-901b-252d2364cc01-cni-binary-copy\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512580 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-etc-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.512596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-var-lib-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.519515 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.530366 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.539522 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.547388 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.560459 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.569832 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.580923 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.595031 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.604647 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.613691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-kubelet\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.613921 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614030 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.613961 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614213 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.613785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-kubelet\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614049 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614435 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-systemd-units\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614526 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-netns\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614619 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg6ql\" (UniqueName: \"kubernetes.io/projected/742e6df8-2a68-426e-982c-ef825c6efca3-kube-api-access-tg6ql\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614533 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-systemd-units\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614572 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-netns\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614774 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-ovn-kubernetes\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614817 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-script-lib\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 
17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614837 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-config\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614878 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-system-cni-dir\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/68ae92b3-aced-409b-901b-252d2364cc01-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614920 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-bin\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/742e6df8-2a68-426e-982c-ef825c6efca3-ovn-node-metrics-cert\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 
13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68ae92b3-aced-409b-901b-252d2364cc01-cni-binary-copy\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614974 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-system-cni-dir\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.614994 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615016 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-var-lib-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615034 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-etc-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615065 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-systemd\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-ovn\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615129 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-cnibin\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615150 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-env-overrides\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615170 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-log-socket\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 
13:36:42.615191 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-slash\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-node-log\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615222 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-netd\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-os-release\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615259 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42dxs\" (UniqueName: \"kubernetes.io/projected/68ae92b3-aced-409b-901b-252d2364cc01-kube-api-access-42dxs\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615560 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-script-lib\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615671 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-ovn-kubernetes\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615766 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-ovn\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.615844 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-bin\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616014 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-config\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616062 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-var-lib-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616147 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/68ae92b3-aced-409b-901b-252d2364cc01-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616192 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-cnibin\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616209 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-etc-openvswitch\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-systemd\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-node-log\") 
pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616331 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-log-socket\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616354 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-slash\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616376 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-netd\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616399 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616484 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/68ae92b3-aced-409b-901b-252d2364cc01-cni-binary-copy\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: 
\"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616516 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-env-overrides\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.616530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/68ae92b3-aced-409b-901b-252d2364cc01-os-release\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.621278 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/742e6df8-2a68-426e-982c-ef825c6efca3-ovn-node-metrics-cert\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.628674 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.633737 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg6ql\" (UniqueName: \"kubernetes.io/projected/742e6df8-2a68-426e-982c-ef825c6efca3-kube-api-access-tg6ql\") pod \"ovnkube-node-5cplg\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.633985 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42dxs\" (UniqueName: \"kubernetes.io/projected/68ae92b3-aced-409b-901b-252d2364cc01-kube-api-access-42dxs\") pod \"multus-additional-cni-plugins-6xvnz\" (UID: \"68ae92b3-aced-409b-901b-252d2364cc01\") " pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.669710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerStarted","Data":"19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.669771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" 
event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerStarted","Data":"12e9d90ccabc6f0baf15b0d4d1aa71c1a5e8358a8065ddd386023d20fb0c962e"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.670932 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6l7rv" event={"ID":"dac64b2f-4b0b-454c-96e0-fc7d563d300f","Type":"ContainerStarted","Data":"903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.670974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6l7rv" event={"ID":"dac64b2f-4b0b-454c-96e0-fc7d563d300f","Type":"ContainerStarted","Data":"9328779c9e624c2995e50e1d7181ad2b9b27a476ad5da32105b3a225d98848c5"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.672317 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.672461 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"70f045ffbdab8eb2711c0c5150aec23350851f7dfa5013df94abaaad607ef8c8"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.673889 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.676210 4768 scope.go:117] "RemoveContainer" containerID="9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f" Feb 17 13:36:42 crc kubenswrapper[4768]: E0217 13:36:42.676404 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.677597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.677674 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.677689 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d71860d9865fc73a5d4ad2c9a9ca3398ae02953939b30668e3ddac3e75841e59"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.678714 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6dcae2465dc490b25166ad9989dc2f5c917fa986fb65599b0218bf846a4fc0a5"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.680064 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" 
event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.680261 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.680379 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"34f0f125cccda39844ccffcd96f6c46f3f28c663ab132f5c61ef36289f9272aa"} Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.689467 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.715872 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.753514 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.784670 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.793076 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.800877 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: W0217 13:36:42.804970 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68ae92b3_aced_409b_901b_252d2364cc01.slice/crio-73dc4dd86cf191ffb7e34b329f51aac5807173b57486d9f902079c78ca098533 WatchSource:0}: Error finding container 73dc4dd86cf191ffb7e34b329f51aac5807173b57486d9f902079c78ca098533: Status 404 returned error can't find the container with id 73dc4dd86cf191ffb7e34b329f51aac5807173b57486d9f902079c78ca098533 Feb 17 13:36:42 crc kubenswrapper[4768]: W0217 13:36:42.807011 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod742e6df8_2a68_426e_982c_ef825c6efca3.slice/crio-4233bdbe1d37ecf7f91ff245413808d8b89fdabf69050a2c99a687098eff401f WatchSource:0}: Error finding container 4233bdbe1d37ecf7f91ff245413808d8b89fdabf69050a2c99a687098eff401f: Status 404 returned error can't find the container with id 
4233bdbe1d37ecf7f91ff245413808d8b89fdabf69050a2c99a687098eff401f Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.833956 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56007909b34b10d26eb61969c4c981eb15672cbfc95eeb424d47068a11f2d69f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:25Z\\\",\\\"message\\\":\\\"W0217 13:36:24.674264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 13:36:24.676596 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771335384 cert, and key in /tmp/serving-cert-2323124499/serving-signer.crt, /tmp/serving-cert-2323124499/serving-signer.key\\\\nI0217 13:36:25.041663 1 observer_polling.go:159] Starting file observer\\\\nW0217 13:36:25.046527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 13:36:25.046685 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:25.048335 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2323124499/tls.crt::/tmp/serving-cert-2323124499/tls.key\\\\\\\"\\\\nF0217 13:36:25.371205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.872702 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.912367 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.952828 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:42 crc kubenswrapper[4768]: I0217 13:36:42.991932 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:42Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.034407 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.072340 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.112552 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.151914 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.192232 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.221122 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 
13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221266 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:36:45.22123364 +0000 UTC m=+24.500620072 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.221323 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.221430 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.221465 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221553 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221601 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221620 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221634 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221675 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:45.221650343 +0000 UTC m=+24.501036835 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221686 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221702 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:45.221690934 +0000 UTC m=+24.501077476 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.221780 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:45.221746836 +0000 UTC m=+24.501133338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.233937 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.280191 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.315188 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.321804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.321922 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.321941 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.321952 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.321997 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:45.321983852 +0000 UTC m=+24.601370294 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.357706 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.396826 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.433849 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.471628 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7
c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.487907 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:03:49.157343044 +0000 UTC Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.533684 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.533747 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.533815 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.533824 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.533909 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:43 crc kubenswrapper[4768]: E0217 13:36:43.533974 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.538159 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.538976 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.539869 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.540704 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.541467 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.542130 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.542925 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.543706 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.544521 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.545168 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.545757 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.548648 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.549287 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.550399 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.550935 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.552001 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.552686 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.553215 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.554225 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.554941 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.555456 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.556518 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.557014 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.558284 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.558764 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.559818 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.560520 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.561514 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.562306 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.562914 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.563941 4768 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.564073 4768 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.565757 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.566811 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.567383 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.568936 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.570377 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.571019 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.572123 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.572938 4768 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.573557 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.574825 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.576033 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.576783 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.577808 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.578689 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.579854 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.580767 4768 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.581982 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.582731 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.583316 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.584451 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.585205 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.586358 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.683433 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c" exitCode=0 Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 
13:36:43.683497 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"} Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.683528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"4233bdbe1d37ecf7f91ff245413808d8b89fdabf69050a2c99a687098eff401f"} Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.686166 4768 generic.go:334] "Generic (PLEG): container finished" podID="68ae92b3-aced-409b-901b-252d2364cc01" containerID="3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694" exitCode=0 Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.686460 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerDied","Data":"3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694"} Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.686513 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerStarted","Data":"73dc4dd86cf191ffb7e34b329f51aac5807173b57486d9f902079c78ca098533"} Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.706488 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.730896 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.761194 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.779641 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.794606 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.810936 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.825685 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.854752 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.875016 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.893589 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.914856 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.965765 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:43 crc kubenswrapper[4768]: I0217 13:36:43.998484 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:43Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.032886 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.070328 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.110887 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.152534 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.195468 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9df
fd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.232960 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.254597 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.255168 4768 scope.go:117] "RemoveContainer" containerID="9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f" Feb 17 13:36:44 crc kubenswrapper[4768]: E0217 13:36:44.255304 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.276716 4768 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.310630 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.353761 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.394958 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.433049 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.519685 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:29:15.122928437 +0000 UTC Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.692044 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"} Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.692092 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"} Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.692123 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"} Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.692135 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"} Feb 17 
13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.694154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerStarted","Data":"8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de"} Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.695488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc"} Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.712146 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.733162 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.744607 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.760606 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.771032 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.781131 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.790210 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.807547 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.819891 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.830426 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7
c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.871676 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.914514 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:44 crc kubenswrapper[4768]: I0217 13:36:44.952535 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.001132 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:44Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.035871 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.092350 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7
c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.132516 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.132494 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.135481 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.148711 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.172085 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.211829 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.243032 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.243160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.243192 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.243220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.243332 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.243390 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:49.243375107 +0000 UTC m=+28.522761549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.243882 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.243904 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.243918 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.243951 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:49.243940974 +0000 UTC m=+28.523327416 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.243991 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.244018 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:49.244009696 +0000 UTC m=+28.523396148 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.244156 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:36:49.24414496 +0000 UTC m=+28.523531402 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.251943 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.291065 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.331200 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.344265 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.344468 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.344496 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:45 
crc kubenswrapper[4768]: E0217 13:36:45.344511 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.344575 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:49.344560201 +0000 UTC m=+28.623946643 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.372552 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.411878 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.456478 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.495451 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.520497 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:33:17.386121436 +0000 UTC Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.532986 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.533285 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.533310 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.533311 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.533419 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.533522 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.533619 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.570388 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.609817 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.651867 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.692509 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.702463 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"} Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 
13:36:45.702501 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"} Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.704906 4768 generic.go:334] "Generic (PLEG): container finished" podID="68ae92b3-aced-409b-901b-252d2364cc01" containerID="8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de" exitCode=0 Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.704974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerDied","Data":"8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de"} Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.734638 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: E0217 13:36:45.746249 4768 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.796210 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c
07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.836224 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.871337 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.912562 4768 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/hos
t/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.960652 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:45 crc kubenswrapper[4768]: I0217 13:36:45.995495 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:45Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.030315 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.073803 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.114389 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.152828 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.191242 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.231228 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.276497 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.310804 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.353675 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.392416 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.430903 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.470376 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.521680 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:45:11.6374531 +0000 UTC Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.711345 4768 generic.go:334] "Generic 
(PLEG): container finished" podID="68ae92b3-aced-409b-901b-252d2364cc01" containerID="8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101" exitCode=0 Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.711462 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerDied","Data":"8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101"} Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.731843 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 
13:36:46.747574 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.763479 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.776126 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.788257 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.799527 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.810704 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.820889 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.834416 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.873831 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.913495 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.926542 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hngsc"] Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 
13:36:46.926966 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.953145 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/ho
st/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:46Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.957476 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-host\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.957560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-serviceca\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.957586 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjbb\" (UniqueName: \"kubernetes.io/projected/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-kube-api-access-fbjbb\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.962758 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 13:36:46 crc kubenswrapper[4768]: I0217 13:36:46.983500 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.003576 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.023279 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.058601 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-serviceca\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.058645 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbjbb\" (UniqueName: \"kubernetes.io/projected/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-kube-api-access-fbjbb\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.058680 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-host\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.058739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-host\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.059790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-serviceca\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.077933 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.099805 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbjbb\" (UniqueName: \"kubernetes.io/projected/37a9682c-5dd5-49ce-bd8c-60e91527ec2a-kube-api-access-fbjbb\") pod \"node-ca-hngsc\" (UID: \"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\") " pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.133959 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.172671 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.211889 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.241536 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hngsc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.247872 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.249400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.249445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.249456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.249604 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.250630 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: W0217 13:36:47.252933 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a9682c_5dd5_49ce_bd8c_60e91527ec2a.slice/crio-a90a92e1cf857acf00ae8cd22c8a7b36a9eb5070e6df1ecef651f95718039e56 WatchSource:0}: Error finding container a90a92e1cf857acf00ae8cd22c8a7b36a9eb5070e6df1ecef651f95718039e56: Status 404 returned error can't find the container with id a90a92e1cf857acf00ae8cd22c8a7b36a9eb5070e6df1ecef651f95718039e56 Feb 17 
13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.305088 4768 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.305566 4768 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.307027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.307061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.307072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.307091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.307120 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.322467 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.326678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.326729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.326741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.326759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.326771 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.330808 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.341460 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.345216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.345256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.345267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.345284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.345295 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.358223 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.361433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.361468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.361481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.361498 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.361511 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.373333 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.374880 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.378559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.378592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.378602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.378617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.378628 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.389574 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.389739 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.391839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.391874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.391884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.391903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.391915 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.411779 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.451124 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.489846 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.494429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.494478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.494490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 
13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.494507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.494520 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.522764 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:13:30.758276969 +0000 UTC Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.534330 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.534385 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.534330 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.534474 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.534609 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:47 crc kubenswrapper[4768]: E0217 13:36:47.534692 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.535629 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.573366 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.596982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.597028 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.597040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.597059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.597071 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.612377 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.650446 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.695002 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.699091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.699145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.699164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.699182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.699194 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.715819 4768 generic.go:334] "Generic (PLEG): container finished" podID="68ae92b3-aced-409b-901b-252d2364cc01" containerID="473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045" exitCode=0 Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.715862 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerDied","Data":"473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.718048 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hngsc" event={"ID":"37a9682c-5dd5-49ce-bd8c-60e91527ec2a","Type":"ContainerStarted","Data":"db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.718083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hngsc" event={"ID":"37a9682c-5dd5-49ce-bd8c-60e91527ec2a","Type":"ContainerStarted","Data":"a90a92e1cf857acf00ae8cd22c8a7b36a9eb5070e6df1ecef651f95718039e56"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.722433 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.735783 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.775392 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.801068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.801090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.801282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.801305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.801314 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.812053 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.850155 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.891540 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.904019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.904068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.904079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.904113 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.904127 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:47Z","lastTransitionTime":"2026-02-17T13:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.932836 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:47 crc kubenswrapper[4768]: I0217 13:36:47.969795 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:47Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.006881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.006943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.006970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.007003 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.007026 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.022929 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.051546 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.091287 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.109037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.109063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.109073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.109088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.109121 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.131911 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.172367 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.212024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.212054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.212062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.212077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.212087 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.212802 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z 
is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.258343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.294432 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.314803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.314839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.314847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.314862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.314871 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.350548 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.376960 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.411435 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.417557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.417588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.417598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.417613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.417623 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.450282 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.494961 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.520530 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.520574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.520585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.520603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.520620 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.523313 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:00:14.733399675 +0000 UTC Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.531357 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.568388 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.665145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.665186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.665195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.665214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.665224 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.719813 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c0
7ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.728264 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerStarted","Data":"3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.741144 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.741206 4768 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.751452 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: 
I0217 13:36:48.762561 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 
13:36:48.766842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.766877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.766889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.766908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.766922 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.796059 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.828714 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.869674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.869718 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.869729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 
13:36:48.869750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.869762 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.871311 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.917479 4768 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.955406 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.972413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.972444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.972456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.972473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.972490 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:48Z","lastTransitionTime":"2026-02-17T13:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:48 crc kubenswrapper[4768]: I0217 13:36:48.989686 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:48Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.034266 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.074654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.074690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.074699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.074672 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.074714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.074786 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.113268 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.153432 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.177249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.177521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.177639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.177723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.177811 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.194710 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.231421 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.281110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.281150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.281158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.281176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.281190 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.283522 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.287432 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.287643 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:36:57.28761098 +0000 UTC m=+36.566997422 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.287725 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.287762 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.287808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.287960 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered 
Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.287959 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.287978 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.288019 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.288035 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.287999 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:57.287991891 +0000 UTC m=+36.567378333 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.288133 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:57.288094204 +0000 UTC m=+36.567480736 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.288148 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:57.288140585 +0000 UTC m=+36.567527137 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.308694 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.356886 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.383739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.383976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.384069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.384183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.384265 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.389156 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.389300 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.389320 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.389333 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.389376 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:57.38936202 +0000 UTC m=+36.668748472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.394045 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42d
xs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.486620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.486656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.486672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.486690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.486701 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.524466 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:08:51.724287278 +0000 UTC Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.533345 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.533358 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.533725 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.533768 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.533390 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:49 crc kubenswrapper[4768]: E0217 13:36:49.533830 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.589855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.589904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.589917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.589948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.589967 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.692796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.692863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.692873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.692890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.692901 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.735506 4768 generic.go:334] "Generic (PLEG): container finished" podID="68ae92b3-aced-409b-901b-252d2364cc01" containerID="3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec" exitCode=0 Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.735554 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerDied","Data":"3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.755320 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaea
d203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"o
s-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\
\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.782157 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.798545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.798720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.798755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.798781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.798800 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.803934 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.820836 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.833825 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.849019 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.864636 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.875959 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.888122 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.900743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.900799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.900810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.900824 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.900833 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:49Z","lastTransitionTime":"2026-02-17T13:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.907130 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.916296 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.928020 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.939055 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:49 crc kubenswrapper[4768]: I0217 13:36:49.949645 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.004472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc 
kubenswrapper[4768]: I0217 13:36:50.004512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.004520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.004537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.004546 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.106305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.106350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.106361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.106379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.106388 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.208830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.208882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.208894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.208913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.208925 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.311802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.311837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.311845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.311860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.311871 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.362488 4768 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.414982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.415032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.415043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.415059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.415070 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.517927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.517968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.517977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.517995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.518005 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.525315 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:43:02.736903805 +0000 UTC Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.619521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.619555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.619564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.619578 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.619587 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.722022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.722054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.722062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.722078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.722089 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.741571 4768 generic.go:334] "Generic (PLEG): container finished" podID="68ae92b3-aced-409b-901b-252d2364cc01" containerID="f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34" exitCode=0 Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.741648 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerDied","Data":"f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.746077 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.746380 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.756793 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.769541 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.781247 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.793299 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.802518 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:50 crc 
kubenswrapper[4768]: I0217 13:36:50.804407 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.819634 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.824593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.824627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.824637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.824654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.824665 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.834225 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.852557 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.864408 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.878297 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T1
3:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.898549 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.908681 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.919617 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.926734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.926781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.926792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.926811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.926823 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:50Z","lastTransitionTime":"2026-02-17T13:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.934300 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.946421 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9df
fd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.960947 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.971713 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.982354 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:50 crc kubenswrapper[4768]: I0217 13:36:50.992649 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:50Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.002775 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.012493 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.024859 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.029301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.029325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.029335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.029349 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.029359 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.041040 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.050497 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.062416 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.073319 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.083087 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.096232 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.131820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.131864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.131874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.131890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.131902 4768 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.234626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.234678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.234694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.234720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.234736 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.239147 4768 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.337446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.337499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.337510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.337528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.337539 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.440257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.440302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.440315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.440335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.440348 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.525718 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:28:01.781444812 +0000 UTC Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.534032 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.534067 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:51 crc kubenswrapper[4768]: E0217 13:36:51.534170 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.534315 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:51 crc kubenswrapper[4768]: E0217 13:36:51.534456 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:51 crc kubenswrapper[4768]: E0217 13:36:51.534558 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.546148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.546195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.546210 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.546231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.546245 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.557672 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.575547 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.590580 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.605771 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.623598 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.639833 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.648369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.648404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.648416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.648433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.648443 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.657592 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.671812 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.684886 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.694034 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7
c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.706041 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.736938 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.748548 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.755192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.755227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.755236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.755251 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 
13:36:51.755260 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.760842 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.761779 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" event={"ID":"68ae92b3-aced-409b-901b-252d2364cc01","Type":"ContainerStarted","Data":"c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.761815 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.762293 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.777978 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.791211 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.793584 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.806224 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.832238 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.858399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.858445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.858457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.858477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.858489 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.881398 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.910937 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.952178 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.961476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.961523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.961535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.961558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.961571 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:51Z","lastTransitionTime":"2026-02-17T13:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:51 crc kubenswrapper[4768]: I0217 13:36:51.993551 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c0
7ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.032277 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.064391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.064432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.064442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.064457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.064467 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.071859 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.113000 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.157923 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.166673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.166711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.166723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.166742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.166752 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.191787 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.241431 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b6069
4a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.269310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.269361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.269375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.269396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 
13:36:52.269411 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.285899 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.310578 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.353545 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.371731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.371791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.371802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.371820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.372092 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.400879 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.435688 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.474883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.474935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.474958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.474979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.474993 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.481913 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z 
is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.515327 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.526511 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:58:57.078425182 +0000 UTC Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.553842 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.576705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.576762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.576780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.576801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.576816 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.592079 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.632544 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.676123 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.679030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.679071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.679082 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.679122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.679140 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.709779 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.751964 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.763618 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.781576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 
13:36:52.781608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.781617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.781632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.781641 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.790344 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:52Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.884540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.884597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.884608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.884633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.884648 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.988304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.988347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.988361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.988382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:52 crc kubenswrapper[4768]: I0217 13:36:52.988397 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:52Z","lastTransitionTime":"2026-02-17T13:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.093896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.093951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.093963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.093980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.093990 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.196682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.196723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.196732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.196749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.196759 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.299322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.299363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.299375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.299393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.299406 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.401691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.401734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.401744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.401760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.401770 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.503640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.503674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.503682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.503697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.503706 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.527509 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:45:16.420744236 +0000 UTC Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.533856 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.533887 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.533856 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:53 crc kubenswrapper[4768]: E0217 13:36:53.533989 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:53 crc kubenswrapper[4768]: E0217 13:36:53.534037 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:53 crc kubenswrapper[4768]: E0217 13:36:53.534070 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.606571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.606611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.606621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.606638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.606649 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.709273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.709315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.709326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.709344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.709355 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.766408 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.811804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.811882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.811894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.811914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.811926 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.894396 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv"] Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.894785 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.896642 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.898721 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.914510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.914559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.914570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.914592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.914605 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:53Z","lastTransitionTime":"2026-02-17T13:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.921540 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:53Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.935854 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:53Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.938523 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.938565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.938609 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.938676 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8rj\" (UniqueName: \"kubernetes.io/projected/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-kube-api-access-nz8rj\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: 
\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.952336 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:53Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.968153 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:53Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.980663 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:53Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:53 crc kubenswrapper[4768]: I0217 13:36:53.992301 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:53Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.005375 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.017661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.017724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.017746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.017771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.017786 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.018741 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.028286 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.040009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz8rj\" (UniqueName: \"kubernetes.io/projected/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-kube-api-access-nz8rj\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.040091 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.040182 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.040235 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.040888 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.040917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.042339 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.046806 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.060474 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz8rj\" (UniqueName: \"kubernetes.io/projected/f7e6dba9-bf9f-464a-9842-f4f2a793dedf-kube-api-access-nz8rj\") pod \"ovnkube-control-plane-749d76644c-62mzv\" (UID: \"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.061196 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.071535 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.084360 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.098265 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.109139 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.119761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc 
kubenswrapper[4768]: I0217 13:36:54.119793 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.119803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.119820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.119830 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.207752 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.223361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.223437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.223462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.223494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.223518 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: W0217 13:36:54.231374 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7e6dba9_bf9f_464a_9842_f4f2a793dedf.slice/crio-45cabc7a2124fc7d3cfac621800c2608b786264f68db0c1890016957e970e1bb WatchSource:0}: Error finding container 45cabc7a2124fc7d3cfac621800c2608b786264f68db0c1890016957e970e1bb: Status 404 returned error can't find the container with id 45cabc7a2124fc7d3cfac621800c2608b786264f68db0c1890016957e970e1bb Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.326177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.326219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.326230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.326247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.326259 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.428202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.428237 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.428253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.428267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.428277 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.528519 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:05:30.495350391 +0000 UTC Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.530850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.530917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.530943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.530974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.530998 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.634553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.635001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.635024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.635049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.635072 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.737967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.738037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.738059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.738085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.738133 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.771767 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/0.log" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.774495 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd" exitCode=1 Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.774551 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.775328 4768 scope.go:117] "RemoveContainer" containerID="3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.778745 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" event={"ID":"f7e6dba9-bf9f-464a-9842-f4f2a793dedf","Type":"ContainerStarted","Data":"4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.778779 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" event={"ID":"f7e6dba9-bf9f-464a-9842-f4f2a793dedf","Type":"ContainerStarted","Data":"25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.778794 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" 
event={"ID":"f7e6dba9-bf9f-464a-9842-f4f2a793dedf","Type":"ContainerStarted","Data":"45cabc7a2124fc7d3cfac621800c2608b786264f68db0c1890016957e970e1bb"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.788756 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name
\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.802230 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.822965 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:36:53.869753 6056 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 13:36:53.869795 6056 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0217 13:36:53.869863 6056 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:36:53.869879 6056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:36:53.869908 6056 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:36:53.869952 6056 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 13:36:53.869970 6056 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:36:53.870004 6056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 13:36:53.870033 6056 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 13:36:53.870050 6056 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:36:53.870065 6056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:36:53.870093 6056 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:36:53.870134 6056 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:36:53.870209 6056 factory.go:656] Stopping watch factory\\\\nI0217 13:36:53.870236 6056 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.840541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.840589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.840602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.840618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.840630 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.844546 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.865651 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.879468 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.895673 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.907171 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.923089 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.939746 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.944121 4768 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.944147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.944158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.944175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.944186 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:54Z","lastTransitionTime":"2026-02-17T13:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.953816 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.966001 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:54 crc kubenswrapper[4768]: I0217 13:36:54.979736 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.012566 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:54Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.046837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.046892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.046904 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.046920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.046932 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.057411 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.073964 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.089003 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.105635 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.117070 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.134245 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.150053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.150076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 
13:36:55.150086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.150119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.150131 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.150625 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.166051 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.188052 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.207990 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:36:53.869753 6056 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 13:36:53.869795 6056 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0217 13:36:53.869863 6056 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:36:53.869879 6056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:36:53.869908 6056 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:36:53.869952 6056 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 13:36:53.869970 6056 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:36:53.870004 6056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 13:36:53.870033 6056 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 13:36:53.870050 6056 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:36:53.870065 6056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:36:53.870093 6056 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:36:53.870134 6056 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:36:53.870209 6056 factory.go:656] Stopping watch factory\\\\nI0217 13:36:53.870236 6056 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.217772 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.231180 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e
2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051
c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.242536 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405c
a6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.252500 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.252532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.252542 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.252557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.252567 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.256852 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.268380 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.281585 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.354836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.355189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.355257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.355347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.355412 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.408643 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5bxh7"] Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.409318 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:55 crc kubenswrapper[4768]: E0217 13:36:55.409439 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.422608 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.435078 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.475471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltsbm\" (UniqueName: \"kubernetes.io/projected/8c8b1469-ed55-4743-9553-f81efd79e5f1-kube-api-access-ltsbm\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.475539 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.477219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.477249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.477258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.477274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.477282 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.495864 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.511672 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.525959 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.528924 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:23:57.206546891 +0000 UTC Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.533401 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:55 crc kubenswrapper[4768]: E0217 13:36:55.533549 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.533628 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:55 crc kubenswrapper[4768]: E0217 13:36:55.533856 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.533946 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:55 crc kubenswrapper[4768]: E0217 13:36:55.534082 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.543155 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc 
kubenswrapper[4768]: I0217 13:36:55.561305 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.574310 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.576254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsbm\" (UniqueName: \"kubernetes.io/projected/8c8b1469-ed55-4743-9553-f81efd79e5f1-kube-api-access-ltsbm\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.576399 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:55 crc kubenswrapper[4768]: E0217 13:36:55.576564 4768 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:55 crc kubenswrapper[4768]: E0217 13:36:55.576624 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:56.076609859 +0000 UTC m=+35.355996301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.579608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.579648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.579660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.579675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.579691 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.590267 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.601678 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltsbm\" (UniqueName: \"kubernetes.io/projected/8c8b1469-ed55-4743-9553-f81efd79e5f1-kube-api-access-ltsbm\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.603621 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.615687 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.632995 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:36:53.869753 6056 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 13:36:53.869795 6056 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0217 13:36:53.869863 6056 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 13:36:53.869879 6056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:36:53.869908 6056 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:36:53.869952 6056 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 13:36:53.869970 6056 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:36:53.870004 6056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 13:36:53.870033 6056 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 13:36:53.870050 6056 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:36:53.870065 6056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:36:53.870093 6056 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:36:53.870134 6056 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:36:53.870209 6056 factory.go:656] Stopping watch factory\\\\nI0217 13:36:53.870236 6056 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.641927 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.655066 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.672094 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.682483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.682540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.682553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.682573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.682585 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.684610 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.783903 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/1.log" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 
13:36:55.784418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.784447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.784458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.784466 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/0.log" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.784473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.784528 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.787128 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705" exitCode=1 Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.787128 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.787186 4768 scope.go:117] "RemoveContainer" containerID="3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.788267 4768 scope.go:117] "RemoveContainer" containerID="e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705" Feb 17 13:36:55 crc kubenswrapper[4768]: E0217 13:36:55.788449 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.802990 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.813630 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.822956 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.832334 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc 
kubenswrapper[4768]: I0217 13:36:55.843580 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.856047 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.866834 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.874997 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.883788 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.886119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.886150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.886159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.886172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.886182 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.894448 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.905314 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.914924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.926750 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.945246 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:36:53.869753 6056 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 13:36:53.869795 6056 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 13:36:53.869863 6056 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0217 13:36:53.869879 6056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:36:53.869908 6056 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:36:53.869952 6056 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 13:36:53.869970 6056 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:36:53.870004 6056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 13:36:53.870033 6056 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 13:36:53.870050 6056 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:36:53.870065 6056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:36:53.870093 6056 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:36:53.870134 6056 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:36:53.870209 6056 factory.go:656] Stopping watch factory\\\\nI0217 13:36:53.870236 6056 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\" not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:36:55.571984 6269 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 2.624546ms\\\\nI0217 13:36:55.571947 6269 services_controller.go:434] Service openshift-machine-config-operator/machine-config-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 8bc1afc2-8724-4135-84df-aee09f23af4c 4514 0 
2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00757033b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},Cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.957679 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.968036 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.988424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.988471 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.988481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.988500 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:55 crc kubenswrapper[4768]: I0217 13:36:55.988511 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:55Z","lastTransitionTime":"2026-02-17T13:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.080845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:56 crc kubenswrapper[4768]: E0217 13:36:56.080921 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:56 crc kubenswrapper[4768]: E0217 13:36:56.080981 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:57.080966402 +0000 UTC m=+36.360352844 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.090463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.090496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.090506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.090522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.090532 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.193618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.193663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.193682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.193706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.193723 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.296170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.296215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.296227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.296244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.296257 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.398342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.398380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.398391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.398408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.398419 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.500998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.501042 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.501053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.501071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.501082 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.529382 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:28:36.518088871 +0000 UTC Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.533753 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:56 crc kubenswrapper[4768]: E0217 13:36:56.533906 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.603922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.603969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.603982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.604000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.604016 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.706432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.706679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.706744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.706823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.706907 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.793929 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/1.log" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.809340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.809375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.809390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.809409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.809421 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.912033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.912079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.912089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.912126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:56 crc kubenswrapper[4768]: I0217 13:36:56.912137 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:56Z","lastTransitionTime":"2026-02-17T13:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.014489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.014537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.014546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.014562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.014572 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.091352 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.091724 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.091886 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:36:59.09185319 +0000 UTC m=+38.371239632 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.119379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.119463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.119478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.119501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.119518 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.221809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.221851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.221866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.221881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.221892 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.293319 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.293475 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293513 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:37:13.293487926 +0000 UTC m=+52.572874368 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293543 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.293564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293587 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:13.293574959 +0000 UTC m=+52.572961401 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.293623 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293787 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293837 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:13.293825586 +0000 UTC m=+52.573212108 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293912 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293930 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293943 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.293982 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:13.29397314 +0000 UTC m=+52.573359672 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.325268 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.325348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.325360 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.325395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.325409 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.394615 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.394751 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.394778 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.394790 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.394842 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:13.394825064 +0000 UTC m=+52.674211506 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.428599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.428655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.428678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.428710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.428735 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.522259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.522295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.522305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.522321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.522330 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.530152 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:12:28.507338348 +0000 UTC Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.533431 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",
\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.533690 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.533777 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.533780 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.533899 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.534289 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.534532 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.534772 4768 scope.go:117] "RemoveContainer" containerID="9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.536660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.536729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.536742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.536758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.536768 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.549208 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.555077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.555132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.555144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.555162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.555174 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.569416 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.573964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.574007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.574019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.574060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.574071 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.590077 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.594239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.594302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.594312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.594327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.594335 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.605513 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: E0217 13:36:57.605684 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.606967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.606995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.607003 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.607020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.607031 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.712977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.713023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.713035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.713052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.713063 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.802940 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.804656 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.804954 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.816652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.816686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.816697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.816713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.816724 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.820604 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.839705 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.872176 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:36:53.869753 6056 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 13:36:53.869795 6056 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 13:36:53.869863 6056 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0217 13:36:53.869879 6056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:36:53.869908 6056 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:36:53.869952 6056 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 13:36:53.869970 6056 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:36:53.870004 6056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 13:36:53.870033 6056 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 13:36:53.870050 6056 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:36:53.870065 6056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:36:53.870093 6056 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:36:53.870134 6056 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:36:53.870209 6056 factory.go:656] Stopping watch factory\\\\nI0217 13:36:53.870236 6056 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\" not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:36:55.571984 6269 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 2.624546ms\\\\nI0217 13:36:55.571947 6269 services_controller.go:434] Service openshift-machine-config-operator/machine-config-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 8bc1afc2-8724-4135-84df-aee09f23af4c 4514 0 
2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00757033b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},Cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.887652 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.907466 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.919460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.919502 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.919513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.919533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.919545 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:57Z","lastTransitionTime":"2026-02-17T13:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.926412 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.948810 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.965024 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:57 crc kubenswrapper[4768]: I0217 13:36:57.982145 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.000860 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:57Z is after 2025-08-24T17:21:41Z" Feb 17 
13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.015801 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.022501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.022538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.022547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.022564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.022573 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.028470 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:58 crc 
kubenswrapper[4768]: I0217 13:36:58.040309 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.056924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.069496 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.083209 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:36:58Z is after 2025-08-24T17:21:41Z" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.125115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.125143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.125150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.125166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.125175 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.227158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.227196 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.227213 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.227232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.227243 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.328881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.328916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.328924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.328939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.328950 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.431157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.431201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.431212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.431230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.431244 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.530608 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:01:53.490841104 +0000 UTC Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.533372 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:58 crc kubenswrapper[4768]: E0217 13:36:58.533466 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.533484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.533532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.533548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.533577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.533595 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.636377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.636453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.636479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.636512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.636537 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.738785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.738829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.738840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.738858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.738870 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.841772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.841815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.841825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.841842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.841852 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.944672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.944720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.944742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.944762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:58 crc kubenswrapper[4768]: I0217 13:36:58.944774 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:58Z","lastTransitionTime":"2026-02-17T13:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.046943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.046985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.046994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.047033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.047048 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.113799 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:36:59 crc kubenswrapper[4768]: E0217 13:36:59.114066 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:59 crc kubenswrapper[4768]: E0217 13:36:59.114224 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:03.114188592 +0000 UTC m=+42.393575084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.150211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.150264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.150276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.150298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.150311 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.253147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.253193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.253204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.253220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.253229 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.355085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.355178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.355195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.355223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.355247 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.457713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.457762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.457773 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.457791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.457802 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.531416 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:42:39.164189677 +0000 UTC Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.533770 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.533771 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.533860 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:36:59 crc kubenswrapper[4768]: E0217 13:36:59.534046 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:36:59 crc kubenswrapper[4768]: E0217 13:36:59.534704 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:36:59 crc kubenswrapper[4768]: E0217 13:36:59.534852 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.560856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.560894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.560904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.560920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.560929 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.664589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.664643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.664655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.664677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.664691 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.767043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.767075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.767084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.767112 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.767120 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.870088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.870180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.870199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.870219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.870235 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.972259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.972308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.972320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.972338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:36:59 crc kubenswrapper[4768]: I0217 13:36:59.972351 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:36:59Z","lastTransitionTime":"2026-02-17T13:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.076024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.076083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.076128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.076202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.076219 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.178420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.178477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.178489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.178510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.179060 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.284289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.284356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.284370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.284388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.284400 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.387307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.387349 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.387359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.387375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.387386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.490315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.490356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.490367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.490385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.490397 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.532084 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:48:51.130780274 +0000 UTC Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.533280 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:00 crc kubenswrapper[4768]: E0217 13:37:00.533436 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.593435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.593472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.593485 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.593501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.593512 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.696359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.696411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.696423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.696440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.696454 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.843936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.843982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.843995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.844012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.844024 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.946669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.946737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.946768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.946797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:00 crc kubenswrapper[4768]: I0217 13:37:00.946817 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:00Z","lastTransitionTime":"2026-02-17T13:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.049977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.050025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.050033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.050052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.050061 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.152425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.152471 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.152484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.152505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.152518 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.255368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.255417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.255427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.255449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.255462 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.358629 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.358698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.358708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.358725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.358738 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.464285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.464340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.464352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.464376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.464389 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.532277 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:13:20.810170558 +0000 UTC Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.533382 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.533401 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:01 crc kubenswrapper[4768]: E0217 13:37:01.533516 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.533524 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:01 crc kubenswrapper[4768]: E0217 13:37:01.533773 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:01 crc kubenswrapper[4768]: E0217 13:37:01.533946 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.544958 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.560521 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.567025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.567065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.567077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.567094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.567125 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.573735 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.584436 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.598979 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.627948 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3442a73d88568604ffac8454d14cbb4b22aa3949cb87e52799c0a9d39faf1bbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:36:53.869753 6056 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 13:36:53.869795 6056 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 13:36:53.869863 6056 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0217 13:36:53.869879 6056 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 13:36:53.869908 6056 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 13:36:53.869952 6056 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 13:36:53.869970 6056 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 13:36:53.870004 6056 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 13:36:53.870033 6056 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 13:36:53.870050 6056 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 13:36:53.870065 6056 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 13:36:53.870093 6056 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 13:36:53.870134 6056 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 13:36:53.870209 6056 factory.go:656] Stopping watch factory\\\\nI0217 13:36:53.870236 6056 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\" not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:36:55.571984 6269 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 2.624546ms\\\\nI0217 13:36:55.571947 6269 services_controller.go:434] Service openshift-machine-config-operator/machine-config-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 8bc1afc2-8724-4135-84df-aee09f23af4c 4514 0 
2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00757033b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},Cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.642573 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.653967 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.667603 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.669037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.669072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.669084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.669118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.669131 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.679814 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.692346 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.725625 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.746034 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.758996 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.771010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.771059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.771070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.771093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.771131 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.771871 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.781549 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:01Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.873692 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.873731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.873741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.873758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.873771 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.975894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.976279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.976324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.976348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:01 crc kubenswrapper[4768]: I0217 13:37:01.976358 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:01Z","lastTransitionTime":"2026-02-17T13:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.079088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.079152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.079160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.079173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.079184 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.181201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.181247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.181259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.181276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.181287 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.284007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.284068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.284080 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.284125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.284139 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.386050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.386084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.386094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.386135 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.386146 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.489301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.489367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.489393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.489422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.489445 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.533574 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.533602 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:17:32.160596751 +0000 UTC Feb 17 13:37:02 crc kubenswrapper[4768]: E0217 13:37:02.533775 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.592239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.592272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.592287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.592304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.592315 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.694772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.694808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.694816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.694830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.694839 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.797346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.797389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.797400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.797419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.797428 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.900065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.900162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.900179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.900203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:02 crc kubenswrapper[4768]: I0217 13:37:02.900220 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:02Z","lastTransitionTime":"2026-02-17T13:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.002328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.002883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.002898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.002914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.002926 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.105046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.105075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.105083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.105111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.105119 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.156007 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:03 crc kubenswrapper[4768]: E0217 13:37:03.156178 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:03 crc kubenswrapper[4768]: E0217 13:37:03.156276 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:11.156257012 +0000 UTC m=+50.435643454 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.207511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.207602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.207626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.207658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.207677 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.310272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.310317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.310326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.310342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.310354 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.412865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.412918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.412929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.412945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.412957 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.515392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.515456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.515472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.515492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.515507 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.533924 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:31:07.920699354 +0000 UTC Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.534088 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.534248 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:03 crc kubenswrapper[4768]: E0217 13:37:03.534345 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.534379 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:03 crc kubenswrapper[4768]: E0217 13:37:03.534461 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:03 crc kubenswrapper[4768]: E0217 13:37:03.534688 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.617474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.617530 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.617541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.617559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.617570 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.720312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.720372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.720384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.720404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.720419 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.822330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.822372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.822382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.822398 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.822408 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.924638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.924685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.924702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.924720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:03 crc kubenswrapper[4768]: I0217 13:37:03.924733 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:03Z","lastTransitionTime":"2026-02-17T13:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.027930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.027975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.027983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.028000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.028009 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.130352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.130386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.130394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.130409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.130418 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.232806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.232849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.232857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.232876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.232885 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.335284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.335325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.335337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.335352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.335361 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.437474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.437526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.437543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.437559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.437569 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.533217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:04 crc kubenswrapper[4768]: E0217 13:37:04.533396 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.534275 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:35:24.506466253 +0000 UTC Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.540353 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.540383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.540391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.540408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.540423 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.643789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.643853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.643871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.643897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.643914 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.746833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.746877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.746888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.746905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.746916 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.849976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.850028 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.850040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.850062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.850074 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.952820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.952865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.952876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.952894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:04 crc kubenswrapper[4768]: I0217 13:37:04.952905 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:04Z","lastTransitionTime":"2026-02-17T13:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.055555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.055592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.055603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.055619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.055630 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.158887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.158919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.158927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.158942 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.158953 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.261597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.261654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.261669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.261688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.261700 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.364360 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.364407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.364417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.364433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.364448 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.466870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.466916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.466956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.466979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.466990 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.534240 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.534272 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:05 crc kubenswrapper[4768]: E0217 13:37:05.534396 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.534240 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.534464 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:19:16.803666329 +0000 UTC Feb 17 13:37:05 crc kubenswrapper[4768]: E0217 13:37:05.534535 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:05 crc kubenswrapper[4768]: E0217 13:37:05.534727 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.569242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.569315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.569342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.569371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.569392 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.671761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.671818 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.671832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.671854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.671867 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.774280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.774328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.774341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.774362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.774373 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.876645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.876695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.876708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.876729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.876741 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.979073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.979133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.979145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.979161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:05 crc kubenswrapper[4768]: I0217 13:37:05.979174 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:05Z","lastTransitionTime":"2026-02-17T13:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.081930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.081976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.081989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.082006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.082018 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.184518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.184569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.184580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.184596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.184608 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.286882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.286946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.286969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.286997 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.287017 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.389747 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.389787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.389798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.389816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.389828 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.493134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.493197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.493210 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.493226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.493234 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.534330 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:06 crc kubenswrapper[4768]: E0217 13:37:06.534813 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.534962 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:45:37.109438999 +0000 UTC Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.595564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.595601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.595612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.595629 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.595640 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.697874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.697926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.697937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.697954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.697966 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.800834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.800880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.800891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.800911 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.800923 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.903915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.903950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.903958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.903975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:06 crc kubenswrapper[4768]: I0217 13:37:06.903984 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:06Z","lastTransitionTime":"2026-02-17T13:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.005920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.005987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.006008 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.006033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.006049 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.108679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.108730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.108742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.108760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.108772 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.210857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.210894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.210904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.210919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.210928 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.313554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.313588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.313596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.313612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.313622 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.415368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.415413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.415425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.415441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.415453 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.518208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.518269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.518282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.518297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.518307 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.533539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.533546 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:07 crc kubenswrapper[4768]: E0217 13:37:07.533694 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.533733 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:07 crc kubenswrapper[4768]: E0217 13:37:07.533828 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:07 crc kubenswrapper[4768]: E0217 13:37:07.533905 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.535169 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:01:42.999654433 +0000 UTC Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.620889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.620981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.621005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.621039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.621061 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.724885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.724944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.724967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.724996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.725018 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.826786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.826825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.826833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.826848 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.826859 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.929920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.929975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.929989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.930006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.930018 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.973279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.973350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.973364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.973389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.973403 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:07 crc kubenswrapper[4768]: E0217 13:37:07.990147 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:07Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.995806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.995900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.995950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.995986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:07 crc kubenswrapper[4768]: I0217 13:37:07.996071 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:07Z","lastTransitionTime":"2026-02-17T13:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: E0217 13:37:08.013247 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.017374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.017460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.017479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.017507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.017571 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: E0217 13:37:08.041192 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.046891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.046963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.046974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.047048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.047069 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: E0217 13:37:08.066701 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.072423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.072484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.072495 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.072519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.072530 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: E0217 13:37:08.088166 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: E0217 13:37:08.088371 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.090206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.090300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.090315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.090342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.090354 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.193436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.193501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.193517 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.193543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.193560 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.296078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.296174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.296187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.296205 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.296215 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.398883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.398928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.398941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.398958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.398969 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.501545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.501585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.501594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.501610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.501620 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.534272 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:08 crc kubenswrapper[4768]: E0217 13:37:08.534669 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.534930 4768 scope.go:117] "RemoveContainer" containerID="e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.535254 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:09:51.345889402 +0000 UTC Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.561796 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0846
52d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.576986 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.594073 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.604363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc 
kubenswrapper[4768]: I0217 13:37:08.604403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.604413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.604430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.604441 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.609152 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.628952 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\" not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:36:55.571984 6269 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 2.624546ms\\\\nI0217 13:36:55.571947 6269 services_controller.go:434] Service 
openshift-machine-config-operator/machine-config-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 8bc1afc2-8724-4135-84df-aee09f23af4c 4514 0 2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00757033b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},Cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.638804 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.651888 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e
2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051
c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.664647 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405c
a6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.678823 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.692289 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.707239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.707276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.707287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.707301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.707311 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.709213 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.720533 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.731787 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.743584 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.753692 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.762428 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.809401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.809445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.809455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.809473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.809484 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.840408 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/1.log" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.842163 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.842299 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.857015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.867496 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 
13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.877974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.887953 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.898166 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.908687 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.911480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.911519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.911528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.911544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.911553 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:08Z","lastTransitionTime":"2026-02-17T13:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.921928 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.931756 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.940766 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.951985 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.965020 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.975808 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:08 crc kubenswrapper[4768]: I0217 13:37:08.992200 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:08Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.013286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc 
kubenswrapper[4768]: I0217 13:37:09.013319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.013327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.013340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.013349 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.014187 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\" not yet valid: current time 2026-02-17T13:36:55Z is after 
2025-08-24T17:21:41Z]\\\\nI0217 13:36:55.571984 6269 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 2.624546ms\\\\nI0217 13:36:55.571947 6269 services_controller.go:434] Service openshift-machine-config-operator/machine-config-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 8bc1afc2-8724-4135-84df-aee09f23af4c 4514 0 2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00757033b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-config-operator,},Cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/
ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.032728 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.046405 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.115612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.115695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.115719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.115748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.115770 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.218276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.218310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.218319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.218335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.218344 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.320094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.320154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.320165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.320184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.320195 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.422538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.422591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.422602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.422621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.422633 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.525346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.525388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.525396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.525428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.525447 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.533816 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.533855 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.533925 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:09 crc kubenswrapper[4768]: E0217 13:37:09.534015 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:09 crc kubenswrapper[4768]: E0217 13:37:09.534191 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:09 crc kubenswrapper[4768]: E0217 13:37:09.534492 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.535583 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:29:04.976727175 +0000 UTC Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.627813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.627884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.627894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.627910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.627920 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.729917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.729954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.729964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.729999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.730015 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.832071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.832137 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.832155 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.832174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.832186 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.847662 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/2.log" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.849213 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/1.log" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.853040 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b" exitCode=1 Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.853130 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.853182 4768 scope.go:117] "RemoveContainer" containerID="e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.854395 4768 scope.go:117] "RemoveContainer" containerID="c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b" Feb 17 13:37:09 crc kubenswrapper[4768]: E0217 13:37:09.854675 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.873733 4768 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f084
94e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.887899 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-r
bac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.902241 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.918512 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 
13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.931553 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.934049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.934087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.934116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.934136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.934150 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:09Z","lastTransitionTime":"2026-02-17T13:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.944290 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc 
kubenswrapper[4768]: I0217 13:37:09.959363 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.971045 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.983949 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:09 crc kubenswrapper[4768]: I0217 13:37:09.992809 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.003492 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.017150 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.027866 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.036621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc 
kubenswrapper[4768]: I0217 13:37:10.036664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.036673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.036688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.036696 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.039052 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.055229 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\" not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:36:55.571984 6269 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 2.624546ms\\\\nI0217 13:36:55.571947 6269 services_controller.go:434] Service 
openshift-machine-config-operator/machine-config-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 8bc1afc2-8724-4135-84df-aee09f23af4c 4514 0 2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00757033b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},Cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5
cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.063549 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.138525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.138562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.138571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.138589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.138598 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.242333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.242407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.242430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.242461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.242483 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.268322 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.279581 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.289490 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.301639 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 
13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.313449 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.321592 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.331257 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.345039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.345071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.345079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.345116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.345128 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.347647 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.359306 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.367583 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.379235 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.395877 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e937c78628c81e51ca2970f2890cadee5475242e013b3d67f8ef3eefb2276705\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\" not yet valid: current time 2026-02-17T13:36:55Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:36:55.571984 6269 services_controller.go:360] Finished syncing service apiserver on namespace openshift-kube-apiserver for network=default : 2.624546ms\\\\nI0217 13:36:55.571947 6269 services_controller.go:434] Service 
openshift-machine-config-operator/machine-config-operator retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 8bc1afc2-8724-4135-84df-aee09f23af4c 4514 0 2025-02-23 05:12:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00757033b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},Cluster\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5
cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.405615 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.419636 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.431814 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.441446 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.447385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc 
kubenswrapper[4768]: I0217 13:37:10.447413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.447424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.447440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.447451 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.458823 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.464301 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.471756 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.533628 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:10 crc kubenswrapper[4768]: E0217 13:37:10.533781 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.535900 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:02:12.879568293 +0000 UTC Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.550398 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.550436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.550447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.550463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.550473 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.653660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.653734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.653759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.653791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.653813 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.757581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.757658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.757674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.757693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.757707 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.858896 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/2.log" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.859721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.859785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.859808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.859833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.859852 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.864727 4768 scope.go:117] "RemoveContainer" containerID="c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b" Feb 17 13:37:10 crc kubenswrapper[4768]: E0217 13:37:10.865016 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.877555 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.891176 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.907602 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.919681 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.932473 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.946187 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.957975 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.961522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:10 crc 
kubenswrapper[4768]: I0217 13:37:10.961558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.961567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.961583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.961593 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:10Z","lastTransitionTime":"2026-02-17T13:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.974374 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: 
could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.982439 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:10 crc kubenswrapper[4768]: I0217 13:37:10.992216 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:10Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.001459 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.013658 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.024028 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.033875 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc 
kubenswrapper[4768]: I0217 13:37:11.051163 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.063095 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 
13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.064132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.064191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.064206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.064228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.064243 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.075376 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.167399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.167447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.167459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.167477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.167488 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.242395 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:11 crc kubenswrapper[4768]: E0217 13:37:11.242547 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:11 crc kubenswrapper[4768]: E0217 13:37:11.242613 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:27.242599385 +0000 UTC m=+66.521985827 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.270164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.270207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.270219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.270236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.270251 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.371995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.372045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.372054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.372071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.372083 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.474168 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.474246 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.474256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.474271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.474280 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.533539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:11 crc kubenswrapper[4768]: E0217 13:37:11.533667 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.533539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.533709 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:11 crc kubenswrapper[4768]: E0217 13:37:11.533741 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:11 crc kubenswrapper[4768]: E0217 13:37:11.533825 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.536002 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:35:37.724836412 +0000 UTC Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.544730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"hos
t-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.556339 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.570488 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.576086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.576145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.576153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.576171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.576180 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.581547 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.593456 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.605300 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.622683 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"prox
y-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.637140 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.653671 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.664821 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.677719 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.678278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.678301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.678309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.678323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.678333 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.689150 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405c
a6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.702427 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.714035 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.724307 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.734138 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.745539 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.780400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.780443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.780453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.780470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.780481 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.882474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.882504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.882512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.882524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.882534 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.904672 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.921158 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.932205 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.951044 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.968444 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.979074 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.984577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.984656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.984681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.984713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.984738 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:11Z","lastTransitionTime":"2026-02-17T13:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:11 crc kubenswrapper[4768]: I0217 13:37:11.992204 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c0
7ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:11Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.006186 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.024164 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.039090 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.052596 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc 
kubenswrapper[4768]: I0217 13:37:12.069822 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e37
1a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.082343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.087375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.087491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.087564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.087597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.087682 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.094476 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.105520 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.117055 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.128640 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.142297 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:12Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.190109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.190163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.190175 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.190192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.190201 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.293212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.293266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.293278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.293295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.293306 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.395875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.395908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.395917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.395932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.395941 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.498326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.498369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.498378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.498393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.498403 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.533657 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:12 crc kubenswrapper[4768]: E0217 13:37:12.533778 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.536930 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:09:39.881951864 +0000 UTC Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.600628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.600665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.600675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.600689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.600698 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.702607 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.702654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.702663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.702678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.702687 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.804492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.804528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.804538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.804553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.804562 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.907802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.907875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.907906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.907923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:12 crc kubenswrapper[4768]: I0217 13:37:12.907934 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:12Z","lastTransitionTime":"2026-02-17T13:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.010592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.010646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.010655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.010671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.010680 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.112844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.112893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.112905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.112925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.112938 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.215691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.216014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.216094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.216224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.216337 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.318531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.319538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.319721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.319919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.320200 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.361835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.362006 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:37:45.361969831 +0000 UTC m=+84.641356313 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.362553 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.362706 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.362845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.362765 4768 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.362876 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.363165 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.363188 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.362957 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.363386 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:45.363129104 +0000 UTC m=+84.642515606 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.363502 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:45.363486454 +0000 UTC m=+84.642872986 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.363624 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:45.363610728 +0000 UTC m=+84.642997170 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.423034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.423155 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.423183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.423216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.423262 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.464506 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.464768 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.464857 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.464882 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.464979 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:45.464956236 +0000 UTC m=+84.744342698 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.526224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.526337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.526352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.526372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.526386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.533602 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.533656 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.533735 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.533859 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.533603 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:13 crc kubenswrapper[4768]: E0217 13:37:13.533971 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.537266 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:59:29.273075743 +0000 UTC Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.628549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.628593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.628602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.628620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.628631 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.730879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.730935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.730954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.730974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.730989 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.834176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.834225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.834242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.834265 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.834281 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.936602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.936868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.936962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.937065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:13 crc kubenswrapper[4768]: I0217 13:37:13.937251 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:13Z","lastTransitionTime":"2026-02-17T13:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.039967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.040025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.040034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.040075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.040084 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.142419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.142733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.142832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.142958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.143062 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.245762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.245817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.245829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.245847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.245859 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.348585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.348637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.348653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.348672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.348684 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.450643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.450694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.450711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.450731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.450746 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.533900 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:14 crc kubenswrapper[4768]: E0217 13:37:14.534056 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.538374 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:58:55.329676291 +0000 UTC Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.553872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.553919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.553931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.553961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.553974 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.656374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.656419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.656429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.656444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.656456 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.758731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.758777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.758792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.758811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.758823 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.860790 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.860831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.860843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.860859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.860868 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.963383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.963435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.963448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.963500 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:14 crc kubenswrapper[4768]: I0217 13:37:14.963513 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:14Z","lastTransitionTime":"2026-02-17T13:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.065986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.066023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.066031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.066046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.066055 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.168728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.168776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.168787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.168803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.168814 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.271589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.271626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.271634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.271648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.271656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.374967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.375243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.375252 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.375267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.375275 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.477676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.477720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.477732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.477749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.477760 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.533274 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:15 crc kubenswrapper[4768]: E0217 13:37:15.533412 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.533471 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.533275 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:15 crc kubenswrapper[4768]: E0217 13:37:15.533744 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:15 crc kubenswrapper[4768]: E0217 13:37:15.533701 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.538534 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:15:18.68499395 +0000 UTC Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.580298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.580370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.580395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.580424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.580447 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.682028 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.682063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.682073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.682087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.682120 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.785447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.785528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.785543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.785560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.785573 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.888133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.888175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.888185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.888202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.888212 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.990184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.990239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.990253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.990278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:15 crc kubenswrapper[4768]: I0217 13:37:15.990301 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:15Z","lastTransitionTime":"2026-02-17T13:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.094457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.094499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.094510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.094527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.094539 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.196649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.196936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.197161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.197559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.197649 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.300035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.300065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.300073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.300088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.300111 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.402387 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.402438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.402448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.402464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.402476 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.504966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.505011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.505022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.505039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.505050 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.533221 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:16 crc kubenswrapper[4768]: E0217 13:37:16.533355 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.538759 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:50:13.141969225 +0000 UTC Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.607489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.607556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.607576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.607599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.607613 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.710354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.710395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.710407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.710423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.710434 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.813218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.813264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.813275 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.813292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.813303 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.916203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.916246 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.916256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.916271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:16 crc kubenswrapper[4768]: I0217 13:37:16.916283 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:16Z","lastTransitionTime":"2026-02-17T13:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.018896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.018932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.018941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.018960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.018978 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.121172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.121212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.121228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.121247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.121258 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.223904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.223943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.223951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.223965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.223974 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.326426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.326466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.326478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.326494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.326508 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.428827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.429401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.429501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.429594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.429677 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.531989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.532048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.532061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.532082 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.532094 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.533354 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:37:17 crc kubenswrapper[4768]: E0217 13:37:17.533441 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.533361 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.533511 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:37:17 crc kubenswrapper[4768]: E0217 13:37:17.533591 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:37:17 crc kubenswrapper[4768]: E0217 13:37:17.533649 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.539528 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:51:15.991535077 +0000 UTC
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.633816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.633904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.633921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.633941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.633953 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.736588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.736642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.736652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.736672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.736682 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.838837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.838891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.838908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.838937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.838951 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.941656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.941726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.941738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.941759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:17 crc kubenswrapper[4768]: I0217 13:37:17.941771 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:17Z","lastTransitionTime":"2026-02-17T13:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.044770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.044815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.044827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.044872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.044888 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.146688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.146737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.146751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.146772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.146788 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.249885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.249932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.249940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.249957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.249975 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.351920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.351962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.351970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.351987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.351996 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.368743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.368804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.368824 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.368849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.368868 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:18 crc kubenswrapper[4768]: E0217 13:37:18.386654 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:18Z is after 2025-08-24T17:21:41Z"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.390973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.391261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.391343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.391434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.391513 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 13:37:18 crc kubenswrapper[4768]: E0217 13:37:18.403947 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:18Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.407519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.407583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.407606 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.407638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.407658 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:18 crc kubenswrapper[4768]: E0217 13:37:18.430437 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:18Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.435509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.435713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.435775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.435838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.435898 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:18 crc kubenswrapper[4768]: E0217 13:37:18.467284 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:18Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.474185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.474236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.474252 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.474278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.474294 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:18 crc kubenswrapper[4768]: E0217 13:37:18.494439 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:18Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:18 crc kubenswrapper[4768]: E0217 13:37:18.494645 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.496243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.496282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.496297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.496318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.496331 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.534034 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:18 crc kubenswrapper[4768]: E0217 13:37:18.534212 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.540046 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:29:44.430210741 +0000 UTC Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.599211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.599245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.599254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.599269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.599278 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.701484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.701512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.701522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.701538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.701548 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.803761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.803801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.803813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.803829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.803840 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.906087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.906162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.906179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.906201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:18 crc kubenswrapper[4768]: I0217 13:37:18.906217 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:18Z","lastTransitionTime":"2026-02-17T13:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.008935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.008977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.008985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.008999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.009008 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.111279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.111325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.111335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.111351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.111362 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.213317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.213370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.213386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.213409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.213427 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.316685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.316746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.316765 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.316802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.317388 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.420974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.421025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.421044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.421068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.421087 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.524269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.524320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.524335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.524359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.524377 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.533460 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:19 crc kubenswrapper[4768]: E0217 13:37:19.533610 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.533673 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.533795 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:19 crc kubenswrapper[4768]: E0217 13:37:19.534001 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:19 crc kubenswrapper[4768]: E0217 13:37:19.534201 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.540616 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:04:26.361526809 +0000 UTC Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.626929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.627016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.627045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.627076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.627086 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.729342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.729372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.729380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.729411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.729420 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.831657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.831704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.831719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.831744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.831763 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.934151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.934197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.934209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.934228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:19 crc kubenswrapper[4768]: I0217 13:37:19.934241 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:19Z","lastTransitionTime":"2026-02-17T13:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.037200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.037260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.037273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.037290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.037303 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.139388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.139432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.139442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.139462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.139476 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.241523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.241563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.241575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.241591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.241603 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.343793 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.343835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.343846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.343862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.343873 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.446915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.446959 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.446968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.447009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.447019 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.533808 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:20 crc kubenswrapper[4768]: E0217 13:37:20.534005 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.541261 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:16:49.013738547 +0000 UTC Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.549677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.549740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.549757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.549781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.549798 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.652669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.652715 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.652729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.652745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.652754 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.754923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.754969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.754983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.755001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.755014 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.858415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.858828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.858967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.859145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.859280 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.962622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.962735 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.962761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.963266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:20 crc kubenswrapper[4768]: I0217 13:37:20.963291 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:20Z","lastTransitionTime":"2026-02-17T13:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.066553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.066614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.066636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.066665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.066687 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.169176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.169216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.169229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.169248 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.169260 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.272258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.272332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.272346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.272363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.272374 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.374968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.375009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.375021 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.375037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.375067 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.477151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.477184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.477203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.477222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.477235 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.533263 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.534064 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:21 crc kubenswrapper[4768]: E0217 13:37:21.534241 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.534563 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:21 crc kubenswrapper[4768]: E0217 13:37:21.534681 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:21 crc kubenswrapper[4768]: E0217 13:37:21.534932 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.541729 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:09:27.449988407 +0000 UTC Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.545764 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.558688 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.572245 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.579959 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.580000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.580009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.580026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.580037 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.589078 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.601524 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.616258 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.635481 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.648912 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.659904 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.673683 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.682317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.682350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.682362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.682377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.682389 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.686276 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.697183 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.706242 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.716901 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.728761 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.740014 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.749051 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:21Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.784081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.784144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.784156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.784175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.784185 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.887066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.887136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.887146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.887164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.887176 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.989862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.989903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.989919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.989938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:21 crc kubenswrapper[4768]: I0217 13:37:21.989957 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:21Z","lastTransitionTime":"2026-02-17T13:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.092448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.092500 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.092510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.092524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.092532 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.194772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.194821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.194832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.194849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.194862 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.296797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.296838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.296848 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.296864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.296877 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.398989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.399031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.399043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.399062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.399074 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.501613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.501644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.501656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.501671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.501682 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.534214 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:22 crc kubenswrapper[4768]: E0217 13:37:22.534632 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.534916 4768 scope.go:117] "RemoveContainer" containerID="c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b" Feb 17 13:37:22 crc kubenswrapper[4768]: E0217 13:37:22.535210 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.541925 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:18:26.013021377 +0000 UTC Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.603739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.603777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.603785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.603799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.603808 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.706577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.706643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.706653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.706669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.706678 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.809275 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.809677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.809796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.809900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.809966 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.911646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.911682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.911691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.911706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:22 crc kubenswrapper[4768]: I0217 13:37:22.911715 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:22Z","lastTransitionTime":"2026-02-17T13:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.014032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.014276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.014361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.014491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.014573 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.117286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.117326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.117357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.117374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.117386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.219892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.219925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.219934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.219948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.219956 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.322237 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.322285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.322301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.322324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.322338 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.425569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.425712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.425741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.425775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.425795 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.528496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.528547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.528563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.528583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.528599 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.533923 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.534007 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:23 crc kubenswrapper[4768]: E0217 13:37:23.534252 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.534297 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:23 crc kubenswrapper[4768]: E0217 13:37:23.534371 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:23 crc kubenswrapper[4768]: E0217 13:37:23.534450 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.542278 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:46:54.92552214 +0000 UTC Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.631686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.631720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.631732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.631748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.631760 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.733809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.733841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.733923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.733942 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.733953 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.836296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.836325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.836334 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.836348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.836358 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.938252 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.938305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.938315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.938336 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:23 crc kubenswrapper[4768]: I0217 13:37:23.938354 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:23Z","lastTransitionTime":"2026-02-17T13:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.041452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.041503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.041515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.041535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.041547 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.143364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.143415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.143428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.143447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.143459 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.245954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.246020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.246031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.246047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.246055 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.348452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.348488 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.348504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.348524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.348536 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.451153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.451218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.451245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.451276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.451297 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.533663 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:24 crc kubenswrapper[4768]: E0217 13:37:24.533816 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.542848 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:27:14.606504638 +0000 UTC Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.554532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.554580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.554592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.554611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.554623 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.657597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.657630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.657641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.657659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.657670 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.759791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.759843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.759853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.759870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.759884 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.862575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.862617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.862626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.862641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.862651 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.965453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.965493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.965504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.965522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:24 crc kubenswrapper[4768]: I0217 13:37:24.965535 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:24Z","lastTransitionTime":"2026-02-17T13:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.068647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.068698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.068716 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.068742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.068759 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.170738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.170783 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.170798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.170818 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.170831 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.272950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.272993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.273004 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.273023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.273033 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.376481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.376528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.376542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.376558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.376570 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.478950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.479055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.479078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.479148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.479173 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.533491 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.533585 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:25 crc kubenswrapper[4768]: E0217 13:37:25.533621 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.533691 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:25 crc kubenswrapper[4768]: E0217 13:37:25.533792 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:25 crc kubenswrapper[4768]: E0217 13:37:25.533965 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.543168 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:13:22.327921159 +0000 UTC Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.581995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.582034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.582045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.582061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.582072 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.684521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.684587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.684610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.684638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.684659 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.787067 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.787133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.787154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.787177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.787191 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.890857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.890893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.890903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.890921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.890930 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.992963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.993050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.993063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.993079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:25 crc kubenswrapper[4768]: I0217 13:37:25.993088 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:25Z","lastTransitionTime":"2026-02-17T13:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.095721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.095767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.095779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.095797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.095810 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.197969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.198000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.198007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.198022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.198032 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.300913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.300958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.300973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.300989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.301001 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.404167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.404201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.404208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.404224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.404235 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.506753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.506802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.506811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.506827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.506836 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.533367 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:26 crc kubenswrapper[4768]: E0217 13:37:26.533481 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.543832 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:56:09.773584955 +0000 UTC Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.608832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.608877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.608899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.608922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.608936 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.711480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.711526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.711539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.711556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.711568 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.813460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.813773 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.813850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.813913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.813981 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.916154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.916389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.916485 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.916549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:26 crc kubenswrapper[4768]: I0217 13:37:26.916610 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:26Z","lastTransitionTime":"2026-02-17T13:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.018320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.018572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.018634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.018697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.018752 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.121929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.121966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.121976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.121993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.122004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.224813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.225156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.225263 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.225386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.225506 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.314687 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:27 crc kubenswrapper[4768]: E0217 13:37:27.315123 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:27 crc kubenswrapper[4768]: E0217 13:37:27.315412 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:37:59.315393056 +0000 UTC m=+98.594779498 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.328286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.328328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.328337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.328349 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.328360 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.430163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.430222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.430234 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.430253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.430630 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.533290 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:27 crc kubenswrapper[4768]: E0217 13:37:27.533447 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.533511 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.533290 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:27 crc kubenswrapper[4768]: E0217 13:37:27.533710 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:27 crc kubenswrapper[4768]: E0217 13:37:27.533621 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.534027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.534076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.534162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.534186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.534203 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.543939 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:39:57.653536372 +0000 UTC Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.636570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.636652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.636676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.636705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.636727 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.738915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.738965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.738976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.738993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.739004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.841327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.841393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.841405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.841427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.841439 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.943686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.943760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.943784 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.943814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:27 crc kubenswrapper[4768]: I0217 13:37:27.943838 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:27Z","lastTransitionTime":"2026-02-17T13:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.046643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.046727 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.046749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.046780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.046805 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.149245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.149294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.149416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.149441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.149456 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.251520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.251756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.251821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.251935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.252070 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.354400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.354457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.354469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.354489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.354501 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.456616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.456659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.456670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.456686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.456697 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.534181 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:28 crc kubenswrapper[4768]: E0217 13:37:28.534354 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.544227 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:06:02.122346068 +0000 UTC Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.558771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.558805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.558813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.558826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.558834 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.660831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.660871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.660881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.660897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.660909 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.761178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.761220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.761231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.761249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.761259 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: E0217 13:37:28.779215 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.782593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.782630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.782648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.782670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.782685 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: E0217 13:37:28.795841 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.799188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.799235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.799247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.799266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.799278 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: E0217 13:37:28.811296 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.814335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.814383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.814392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.814407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.814419 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: E0217 13:37:28.829565 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.833514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.833555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.833565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.833582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.833593 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: E0217 13:37:28.849959 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:28Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:28 crc kubenswrapper[4768]: E0217 13:37:28.850158 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.853142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.853184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.853196 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.853214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.853225 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.956186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.956244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.956260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.956283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:28 crc kubenswrapper[4768]: I0217 13:37:28.956297 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:28Z","lastTransitionTime":"2026-02-17T13:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.058370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.058407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.058418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.058434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.058445 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.160342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.160601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.160669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.160741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.160811 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.264057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.264169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.264199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.264274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.264334 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.367508 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.367568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.367582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.367625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.367690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.470417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.470464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.470475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.470492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.470502 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.533553 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:29 crc kubenswrapper[4768]: E0217 13:37:29.533710 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.533828 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:29 crc kubenswrapper[4768]: E0217 13:37:29.533914 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.533929 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:29 crc kubenswrapper[4768]: E0217 13:37:29.534159 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.544616 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:44:40.923810054 +0000 UTC Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.572318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.572363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.572371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.572388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.572400 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.674798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.674850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.674863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.674882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.674893 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.777994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.778059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.778076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.778124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.778141 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.880010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.880258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.880384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.880501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.880635 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.982988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.983338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.983481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.983636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:29 crc kubenswrapper[4768]: I0217 13:37:29.983841 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:29Z","lastTransitionTime":"2026-02-17T13:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.086509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.087262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.087301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.087328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.087342 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.190028 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.190066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.190075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.190093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.190139 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.292311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.292359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.292370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.292388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.292401 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.395086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.395141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.395150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.395165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.395174 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.497788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.497828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.497837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.497856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.497865 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.534128 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:30 crc kubenswrapper[4768]: E0217 13:37:30.534282 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.545190 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:38:29.618689088 +0000 UTC Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.600326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.600357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.600365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.600380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.600388 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.703448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.703490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.703501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.703521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.703533 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.805682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.805713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.805723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.805741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.805749 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.907633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.907664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.907677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.907694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.907705 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:30Z","lastTransitionTime":"2026-02-17T13:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.942302 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/0.log" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.942626 4768 generic.go:334] "Generic (PLEG): container finished" podID="e044bf1f-26b2-4a39-86e6-0440eff3eaa9" containerID="19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1" exitCode=1 Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.942681 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerDied","Data":"19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1"} Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.943701 4768 scope.go:117] "RemoveContainer" containerID="19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.963856 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:30Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:30 crc kubenswrapper[4768]: I0217 13:37:30.975690 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:30Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:30 crc 
kubenswrapper[4768]: I0217 13:37:30.992001 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e37
1a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:30Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.002370 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.009594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.009633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.009651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.009672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.009689 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.014976 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.029627 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.041332 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.051805 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.064732 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01
a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.075904 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7
c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.091750 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.111978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.112025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.112034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.112049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.112059 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.122005 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.133304 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.147085 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.161611 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.180345 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.195343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.214354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.214556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.214618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.214676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.214732 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.317312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.317588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.317711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.317778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.317843 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.419566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.419595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.419603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.419617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.419626 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.521376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.521414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.521427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.521443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.521455 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.533678 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.533688 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:31 crc kubenswrapper[4768]: E0217 13:37:31.534079 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.533706 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:31 crc kubenswrapper[4768]: E0217 13:37:31.534164 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:31 crc kubenswrapper[4768]: E0217 13:37:31.533990 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.545499 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:44:05.72047331 +0000 UTC Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.546366 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\
\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.557727 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.569058 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.579045 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.588502 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.599026 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.608846 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.622958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.622989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.623001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.623019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.623030 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.623657 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.633628 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.644720 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.662431 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.674863 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.684788 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.698297 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.711677 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.724426 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.725129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.725164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.725178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.725193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.725202 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.733641 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.827513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.827588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.827609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.827637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.827661 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.929891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.929931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.929940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.929958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.929969 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:31Z","lastTransitionTime":"2026-02-17T13:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.947008 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/0.log" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.947067 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerStarted","Data":"8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd"} Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.957810 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.972417 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.983228 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:31 crc kubenswrapper[4768]: I0217 13:37:31.994411 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:31Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.006859 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.018941 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.032561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.032594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.032606 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.032623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.032635 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.037858 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.052851 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.063696 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.075998 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.088772 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.100323 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.109743 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc 
kubenswrapper[4768]: I0217 13:37:32.120090 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.130781 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.134419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.134452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.134461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.134475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.134485 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.142875 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.154509 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:32Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.236598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.236645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.236655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.236670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.236680 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.339494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.339549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.339561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.339578 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.339591 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.442329 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.442370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.442378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.442395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.442405 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.533907 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:32 crc kubenswrapper[4768]: E0217 13:37:32.534086 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.545243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.545274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.545285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.545299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.545310 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.545835 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:30:09.691617652 +0000 UTC Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.647154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.647188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.647198 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.647214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.647225 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.749323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.749370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.749382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.749401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.749413 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.852209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.852476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.852542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.852620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.852764 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.954745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.954780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.954789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.954806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:32 crc kubenswrapper[4768]: I0217 13:37:32.954818 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:32Z","lastTransitionTime":"2026-02-17T13:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.057257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.057290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.057298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.057312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.057322 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.158978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.159019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.159031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.159051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.160251 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.263148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.263180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.263189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.263203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.263211 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.365626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.365685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.365698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.365717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.365729 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.468831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.469190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.469375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.469557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.469715 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.533598 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.533640 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.533713 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:33 crc kubenswrapper[4768]: E0217 13:37:33.533736 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:33 crc kubenswrapper[4768]: E0217 13:37:33.533822 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:33 crc kubenswrapper[4768]: E0217 13:37:33.533892 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.546253 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:05:43.738118147 +0000 UTC Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.571909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.571944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.571952 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.571968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.571978 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.673919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.673988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.673997 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.674012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.674021 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.776444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.776504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.776521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.776562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.776577 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.878722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.878748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.878757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.878770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.878779 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.981258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.981298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.981307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.981324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:33 crc kubenswrapper[4768]: I0217 13:37:33.981336 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:33Z","lastTransitionTime":"2026-02-17T13:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.083583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.083622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.083634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.083651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.083661 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.185798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.185836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.185844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.185861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.185871 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.288041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.288090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.288124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.288142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.288154 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.390681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.390723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.390733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.390750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.390760 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.492580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.492622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.492634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.492652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.492663 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.533972 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:34 crc kubenswrapper[4768]: E0217 13:37:34.534166 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.534882 4768 scope.go:117] "RemoveContainer" containerID="c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.548335 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:50:54.923536431 +0000 UTC Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.595467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.595510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.595521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.595540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.595551 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.697358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.697393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.697402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.697418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.697430 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.799616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.799661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.799678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.799699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.799709 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.902388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.902438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.902446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.902464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.902481 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:34Z","lastTransitionTime":"2026-02-17T13:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.958772 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/2.log" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.961512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"} Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.961830 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.973554 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.984975 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:34 crc kubenswrapper[4768]: I0217 13:37:34.998476 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:34Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.007880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.007917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.007929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.007949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.007960 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.015631 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.029872 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8710b3ee2ff8d7aa849b863cf
e8b99fd97e02f57ece68e528c7c23994608bedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.048352 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.062083 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.074003 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.084062 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.097662 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.109466 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.110226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc 
kubenswrapper[4768]: I0217 13:37:35.110254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.110262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.110276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.110285 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.122032 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.131802 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.146671 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.160723 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.171501 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.180839 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:35 crc 
kubenswrapper[4768]: I0217 13:37:35.211838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.211868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.211876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.211889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.211898 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.313609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.313651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.313664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.313680 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.313692 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.416680 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.416726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.416740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.416760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.417166 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.520362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.520479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.520505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.520535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.520558 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.533770 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.533833 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.533854 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:35 crc kubenswrapper[4768]: E0217 13:37:35.534006 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:35 crc kubenswrapper[4768]: E0217 13:37:35.534244 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:35 crc kubenswrapper[4768]: E0217 13:37:35.534413 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.549325 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:16:36.182676086 +0000 UTC Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.623900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.624223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.624312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.624428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.624519 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.727967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.728037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.728060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.728088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.728153 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.830978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.831010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.831018 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.831036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.831044 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.933600 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.933655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.933671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.933697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.933715 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:35Z","lastTransitionTime":"2026-02-17T13:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.967247 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/3.log" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.968230 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/2.log" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.971330 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9" exitCode=1 Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.971369 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"} Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.971405 4768 scope.go:117] "RemoveContainer" containerID="c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.972158 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9" Feb 17 13:37:35 crc kubenswrapper[4768]: E0217 13:37:35.972369 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:37:35 crc kubenswrapper[4768]: I0217 13:37:35.993530 4768 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:35Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.008473 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.025896 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.037011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.037056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc 
kubenswrapper[4768]: I0217 13:37:36.037070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.037092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.037132 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.037173 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.056388 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.069245 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.085556 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.095688 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.110757 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.128326 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2be51c8707a26e2f124761117543e359c32c8fb8d69d1aa54cea3f1b9cfb11b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:09Z\\\",\\\"message\\\":\\\" start default network controller: unable to create admin network policy controller, err: 
could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:09Z is after 2025-08-24T17:21:41Z]\\\\nI0217 13:37:09.670472 6446 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:35Z\\\",\\\"message\\\":\\\"/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 13:37:35.213969 6847 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214004 6847 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214057 6847 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214085 6847 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214131 6847 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.222069 6847 factory.go:656] Stopping watch factory\\\\nI0217 13:37:35.266234 6847 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 13:37:35.266283 6847 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 13:37:35.266359 6847 ovnkube.go:599] Stopped ovnkube\\\\nI0217 13:37:35.266398 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 13:37:35.266476 6847 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.141199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.141239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.141251 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.141269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.141282 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.141621 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.156199 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b1
8f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.169257 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.190323 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.201707 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.217279 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.228554 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:36 crc 
kubenswrapper[4768]: I0217 13:37:36.243092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.243135 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.243142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.243156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.243164 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.345636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.345663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.345670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.345686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.345710 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.447760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.447818 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.447833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.447856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.447869 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.533269 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:36 crc kubenswrapper[4768]: E0217 13:37:36.533408 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.550222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.550296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.550323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.550357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.550378 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.549506 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:17:44.206601656 +0000 UTC Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.652284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.652325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.652336 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.652353 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.652366 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.754527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.754563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.754574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.754590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.754600 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.856736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.856768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.856777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.856790 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.856798 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.959427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.959466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.959474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.959489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.959497 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:36Z","lastTransitionTime":"2026-02-17T13:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.975471 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/3.log" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.979494 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9" Feb 17 13:37:36 crc kubenswrapper[4768]: E0217 13:37:36.979755 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:37:36 crc kubenswrapper[4768]: I0217 13:37:36.994086 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T13:37:36Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.010047 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.024048 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.036357 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.052178 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.061545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.061583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 
13:37:37.061616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.061634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.061646 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.065736 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.078315 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.090460 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.103967 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.124554 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:35Z\\\",\\\"message\\\":\\\"/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI021
7 13:37:35.213969 6847 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214004 6847 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214057 6847 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214085 6847 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214131 6847 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.222069 6847 factory.go:656] Stopping watch factory\\\\nI0217 13:37:35.266234 6847 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 13:37:35.266283 6847 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 13:37:35.266359 6847 ovnkube.go:599] Stopped ovnkube\\\\nI0217 13:37:35.266398 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 13:37:35.266476 6847 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.137043 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.151784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e
2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051
c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.161661 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405c
a6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.163191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.163217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.163225 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.163257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.163269 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.174667 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.186392 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 
13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.198010 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.206980 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:37Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.266203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.266245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.266255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.266274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.266283 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.369171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.369438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.369499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.369575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.369673 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.472528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.472795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.472974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.473093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.473287 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.533660 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.533742 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:37 crc kubenswrapper[4768]: E0217 13:37:37.533835 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.533876 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:37 crc kubenswrapper[4768]: E0217 13:37:37.534034 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:37 crc kubenswrapper[4768]: E0217 13:37:37.534186 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.551680 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:08:57.159246722 +0000 UTC Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.576411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.576444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.576453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.576467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.576476 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.679342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.679372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.679379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.679393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.679403 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.782254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.782289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.782330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.782348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.782359 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.885586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.885667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.885695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.885726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.885749 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.988567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.988602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.988614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.988630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:37 crc kubenswrapper[4768]: I0217 13:37:37.988642 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:37Z","lastTransitionTime":"2026-02-17T13:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.091342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.091371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.091378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.091392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.091401 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.193317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.193359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.193371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.193384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.193393 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.296254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.296325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.296350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.296378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.296395 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.399635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.399674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.399694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.399718 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.399730 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.501564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.501595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.501603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.501618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.501627 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.533498 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:38 crc kubenswrapper[4768]: E0217 13:37:38.533660 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.552240 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:53:30.536612855 +0000 UTC Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.605057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.605129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.605142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.605162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.605174 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.708347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.708390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.708399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.708415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.708427 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.823019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.823350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.823437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.823505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.823567 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.926984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.927052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.927064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.927130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:38 crc kubenswrapper[4768]: I0217 13:37:38.927145 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:38Z","lastTransitionTime":"2026-02-17T13:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.029803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.030061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.030187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.030258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.030322 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.133029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.133061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.133089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.133130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.133140 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.150459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.150525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.150537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.150551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.150561 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: E0217 13:37:39.161907 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.170558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.170826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.170922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.171201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.171924 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: E0217 13:37:39.186448 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.190498 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.190571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.190585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.190602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.190612 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:39Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:39 crc kubenswrapper[4768]: E0217 13:37:39.238558 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.240351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.240428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.240439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.240455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.240465 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.342187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.342222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.342230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.342244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.342253 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.444527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.444594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.444605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.444618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.444628 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.533370 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.533427 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:39 crc kubenswrapper[4768]: E0217 13:37:39.533515 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.533437 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:39 crc kubenswrapper[4768]: E0217 13:37:39.533626 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:39 crc kubenswrapper[4768]: E0217 13:37:39.533710 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.548483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.548552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.548570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.548597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.548614 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.553044 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:49:52.606774283 +0000 UTC Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.676991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.677045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.677058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.677078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.677092 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.779809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.779856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.779866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.779881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.779898 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.882860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.882926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.882949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.882982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.883004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.985531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.985571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.985583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.985602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:39 crc kubenswrapper[4768]: I0217 13:37:39.985613 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:39Z","lastTransitionTime":"2026-02-17T13:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.088621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.088657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.088668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.088685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.088695 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.191399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.191619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.191645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.191722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.191740 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.293801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.293838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.293849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.293864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.293874 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.397012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.397061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.397079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.397142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.397163 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.500463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.500517 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.500533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.500558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.500575 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.534241 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:40 crc kubenswrapper[4768]: E0217 13:37:40.534487 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.553948 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:00:52.275074296 +0000 UTC Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.603340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.603419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.603430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.603451 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.603465 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.705686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.705736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.705750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.705771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.705789 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.808968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.809033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.809055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.809083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.809151 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.911937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.911974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.911983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.911999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:40 crc kubenswrapper[4768]: I0217 13:37:40.912014 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:40Z","lastTransitionTime":"2026-02-17T13:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.015019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.015121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.015141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.015165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.015182 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.117855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.117932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.117945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.117965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.117978 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.219728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.219765 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.219774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.219840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.219851 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.322983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.323049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.323060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.323077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.323087 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.425173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.425214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.425224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.425239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.425248 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.527805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.527922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.527941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.528312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.528516 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.533661 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.533729 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:41 crc kubenswrapper[4768]: E0217 13:37:41.533827 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.533668 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:41 crc kubenswrapper[4768]: E0217 13:37:41.533992 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:41 crc kubenswrapper[4768]: E0217 13:37:41.534170 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.553445 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e33d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os
-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.554471 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:25:21.224821742 +0000 UTC Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.565520 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405c
a6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.578873 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc 
kubenswrapper[4768]: I0217 13:37:41.594382 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e37
1a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.607534 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.628559 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.631497 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.631549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.631564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.631583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.631596 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.647876 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.659905 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.675611 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.695868 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.712400 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.728999 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.734152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc 
kubenswrapper[4768]: I0217 13:37:41.734339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.734444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.734538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.734625 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.748891 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.775165 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:35Z\\\",\\\"message\\\":\\\"/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI021
7 13:37:35.213969 6847 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214004 6847 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214057 6847 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214085 6847 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214131 6847 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.222069 6847 factory.go:656] Stopping watch factory\\\\nI0217 13:37:35.266234 6847 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 13:37:35.266283 6847 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 13:37:35.266359 6847 ovnkube.go:599] Stopped ovnkube\\\\nI0217 13:37:35.266398 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 13:37:35.266476 6847 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.787799 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.802404 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.818456 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:41Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.837341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.837580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.837670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.837760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.837838 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.939835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.939889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.939902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.939921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:41 crc kubenswrapper[4768]: I0217 13:37:41.939934 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:41Z","lastTransitionTime":"2026-02-17T13:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.042430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.042469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.042479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.042493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.042502 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.145088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.145172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.145189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.145212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.145225 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.248084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.248150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.248162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.248179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.248190 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.351838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.351887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.351897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.351913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.351924 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.454743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.454803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.454821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.454846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.454863 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.533651 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:42 crc kubenswrapper[4768]: E0217 13:37:42.533948 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.547621 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.555237 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:17:27.95967065 +0000 UTC Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.556758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.556791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.556799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.556813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.556822 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.660445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.660519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.660538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.660567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.660585 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.764674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.764741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.764767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.764799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.764824 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.867794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.868167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.868183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.868199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.868209 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.970994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.971065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.971081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.971132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:42 crc kubenswrapper[4768]: I0217 13:37:42.971147 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:42Z","lastTransitionTime":"2026-02-17T13:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.073829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.073913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.073925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.073944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.073962 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.176907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.177235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.177348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.177440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.177540 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.280704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.281070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.281332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.281534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.281690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.384504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.384555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.384570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.384592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.384608 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.487243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.487296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.487308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.487325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.487335 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.533508 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:43 crc kubenswrapper[4768]: E0217 13:37:43.533637 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.533702 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.533727 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:43 crc kubenswrapper[4768]: E0217 13:37:43.533825 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:43 crc kubenswrapper[4768]: E0217 13:37:43.534083 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.555830 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:05:14.87086564 +0000 UTC Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.590655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.591149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.591307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.591450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.591597 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.693854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.694212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.694286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.694368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.694434 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.797345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.797728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.797990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.798249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.798447 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.900751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.901140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.901256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.901356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:43 crc kubenswrapper[4768]: I0217 13:37:43.901451 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:43Z","lastTransitionTime":"2026-02-17T13:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.003151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.003218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.003236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.003261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.003279 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.105611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.105644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.105654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.105672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.105684 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.207906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.207979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.208007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.208038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.208060 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.310577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.310634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.310643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.310659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.310668 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.413232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.413556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.413651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.413751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.413852 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.516505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.516562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.516575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.516595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.516611 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.533819 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:44 crc kubenswrapper[4768]: E0217 13:37:44.533965 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.556851 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:25:31.491291177 +0000 UTC Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.619390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.619421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.619430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.619444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.619453 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.721839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.721870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.721879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.721895 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.721914 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.824392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.824434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.824444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.824463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.824474 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.926669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.926728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.926736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.926752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:44 crc kubenswrapper[4768]: I0217 13:37:44.926765 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:44Z","lastTransitionTime":"2026-02-17T13:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.028980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.029016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.029024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.029038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.029048 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.131822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.131871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.131882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.131901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.131913 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.234513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.234553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.234563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.234578 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.234588 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.342978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.343025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.343036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.343054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.343066 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.406973 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407139 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:49.407119995 +0000 UTC m=+148.686506447 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.407233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.407284 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.407333 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407481 4768 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407481 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407498 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407497 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407555 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.407536386 +0000 UTC m=+148.686922868 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407631 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.407600308 +0000 UTC m=+148.686986830 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407513 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.407710 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.407696951 +0000 UTC m=+148.687083523 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.446011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.446053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.446065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.446080 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.446091 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.508824 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.509056 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.509275 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.509342 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.509441 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.509428053 +0000 UTC m=+148.788814495 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.534063 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.534132 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.534221 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.534365 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.534602 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:45 crc kubenswrapper[4768]: E0217 13:37:45.534835 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.548229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.548520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.548675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.548822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.548954 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.557626 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:16:04.238911658 +0000 UTC Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.651465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.651520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.651535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.651555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.651568 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.755929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.755980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.755996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.756020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.756036 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.859802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.859846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.859863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.859888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.859904 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.962753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.963193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.963462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.963697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:45 crc kubenswrapper[4768]: I0217 13:37:45.963892 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:45Z","lastTransitionTime":"2026-02-17T13:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.066254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.066289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.066299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.066317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.066328 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.169548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.169634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.169659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.169757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.169784 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.272907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.272946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.272972 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.272986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.272995 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.375700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.375766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.375779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.375797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.375809 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.479166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.479217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.479227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.479245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.479257 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.533696 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:46 crc kubenswrapper[4768]: E0217 13:37:46.533938 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.558154 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:30:13.292387238 +0000 UTC Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.583009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.583068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.583086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.583158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.583197 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.685505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.685546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.685559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.685577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.685590 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.787902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.787954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.787971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.787994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.788012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.890508 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.890734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.890806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.890872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.890930 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.993513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.993575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.993592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.993622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:46 crc kubenswrapper[4768]: I0217 13:37:46.993644 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:46Z","lastTransitionTime":"2026-02-17T13:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.096537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.096595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.096606 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.096622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.096631 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.199074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.199174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.199195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.199222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.199242 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.301654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.301882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.301949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.302015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.302076 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.403744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.403993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.404137 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.404244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.404334 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.506695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.506731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.506742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.506759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.506771 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.534054 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:47 crc kubenswrapper[4768]: E0217 13:37:47.534236 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.534395 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.534475 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:47 crc kubenswrapper[4768]: E0217 13:37:47.534575 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:47 crc kubenswrapper[4768]: E0217 13:37:47.534655 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.558473 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:02:57.861151378 +0000 UTC Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.610302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.610376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.610400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.610432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.610457 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.713516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.713573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.713591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.713619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.713638 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.816864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.816942 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.816966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.816995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.817014 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.919656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.919916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.920040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.920277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:47 crc kubenswrapper[4768]: I0217 13:37:47.920479 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:47Z","lastTransitionTime":"2026-02-17T13:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.023360 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.023426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.023443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.023469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.023485 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.125749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.125789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.125799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.125815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.125825 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.228613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.228704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.228740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.228773 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.228792 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.331365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.331400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.331409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.331424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.331434 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.435087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.435217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.435258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.435285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.435302 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.533352 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:48 crc kubenswrapper[4768]: E0217 13:37:48.533848 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.538778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.538844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.538863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.538890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.538907 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.559592 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 00:04:07.146244137 +0000 UTC Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.642286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.642679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.642904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.643192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.643420 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.747168 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.747625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.748040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.748348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.748558 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.851397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.851425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.851434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.851449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.851458 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.965501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.965566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.965586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.965614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:48 crc kubenswrapper[4768]: I0217 13:37:48.965632 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:48Z","lastTransitionTime":"2026-02-17T13:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.069392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.069441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.069450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.069466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.069479 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.172754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.173011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.173088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.173185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.173260 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.275553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.275601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.275615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.275635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.275649 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.378404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.378447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.378458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.378476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.378487 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.480786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.480821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.480839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.480854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.480866 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.518751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.518796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.518804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.518822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.518833 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.531583 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.533652 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.533703 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.533875 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.533937 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.534009 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.534084 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.534866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.534894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.534904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.534918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.534928 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.548069 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.551625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.551713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.551731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.551786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.551804 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.560591 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:26:24.68057842 +0000 UTC Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.567842 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",
\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.570556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.570594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.570605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.570621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.570634 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.580590 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.583733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.583941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.584092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.584241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.584353 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.603232 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72b2b2e1-552d-4984-900d-b4db18ea60be\\\",\\\"systemUUID\\\":\\\"85ded5cb-c1f6-4d1b-b23e-ee8660dcd6ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:49Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:49 crc kubenswrapper[4768]: E0217 13:37:49.603498 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.605203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.605232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.605240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.605254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.605262 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.707534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.707777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.707851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.707920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.707998 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.811036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.811088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.811122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.811140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.811152 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.913904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.913954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.913966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.913981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:49 crc kubenswrapper[4768]: I0217 13:37:49.913993 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:49Z","lastTransitionTime":"2026-02-17T13:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.016571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.016615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.016627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.016644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.016656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.125011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.125087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.125151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.125186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.125207 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.227604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.227657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.227671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.227692 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.227706 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.330899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.330981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.331005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.331035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.331060 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.434529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.434641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.434661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.434691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.434710 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.533834 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:50 crc kubenswrapper[4768]: E0217 13:37:50.534693 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.536935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.536969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.536979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.536994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.537008 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.561382 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:06:21.281637746 +0000 UTC Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.639490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.639570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.639597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.639628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.639652 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.742291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.742616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.742698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.742789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.742875 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.845496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.845538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.845549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.845567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.845576 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.948456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.948501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.948509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.948524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:50 crc kubenswrapper[4768]: I0217 13:37:50.948533 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:50Z","lastTransitionTime":"2026-02-17T13:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.051001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.051089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.051136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.051157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.051170 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.154280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.154377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.154395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.154455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.154474 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.257521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.257574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.257586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.257603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.257616 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.360325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.360447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.360473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.360503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.360525 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.463309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.463366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.463383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.463406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.463422 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.534351 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.534382 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.534357 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:51 crc kubenswrapper[4768]: E0217 13:37:51.535193 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:51 crc kubenswrapper[4768]: E0217 13:37:51.535360 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:51 crc kubenswrapper[4768]: E0217 13:37:51.535467 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.535744 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9" Feb 17 13:37:51 crc kubenswrapper[4768]: E0217 13:37:51.535993 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.555645 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ae92b3-aced-409b-901b-252d2364cc01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e6dc9e6ef1908b60694a76b89114accbccc8e3
3d2334e751125f348ff68191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ccce43f2773621b91f6782909e74d6c1cb29f3f2e0d4280753e2719f256d694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44e73498e2987dca421d954e3c6b71c36138496d25e3498934a0ae03c25de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8eee5a030051c479dc3e7d690a338260f3a886f6c8b6764037de0a4b3dcea101\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473b18f574df0c69d4f70ded0dee7c42ee64daab422dff19c6a58a0cde52d045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a84122e995b4744d706f85250dc149036364dc1e476c38b54f08494e0cb41ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e278dccb14a5e674dad398d194080a57e6949529914ddfa9f8a63dbea8a34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42dxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6xvnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.561843 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:00:13.418879118 +0000 UTC Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.565655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.565712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.565729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.565750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.565766 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.569911 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7e6dba9-bf9f-464a-9842-f4f2a793dedf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c892cb5b274e124423e1da0778d7b3415022fb0280eed30a6b48055d44c3bd\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dade5f6bbbe0473f9330ea16398a4522405ca6b22a0e65851a98fe57943b38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nz8rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-62mzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.582870 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c8b1469-ed55-4743-9553-f81efd79e5f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5bxh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc 
kubenswrapper[4768]: I0217 13:37:51.597081 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52dd97c46a2e37
1a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 13:36:40.862072 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 13:36:40.862247 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 13:36:40.862863 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936381186/tls.crt::/tmp/serving-cert-1936381186/tls.key\\\\\\\"\\\\nI0217 13:36:41.350711 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 13:36:41.368959 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 13:36:41.368982 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 13:36:41.369003 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 13:36:41.369008 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 13:36:41.374960 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 13:36:41.374983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 13:36:41.374989 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 13:36:41.374990 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 13:36:41.374994 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 13:36:41.375023 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 13:36:41.375028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 13:36:41.375032 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 13:36:41.377284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.609489 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c0c5a46beaa5e6c9df638b288967eaff1605591faafc373623753e3c434c6ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28c91cfb782b4976153d2afced3117498aaabaafa5bba655b020c13aebb1dc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.625348 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.637280 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.647631 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6l7rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dac64b2f-4b0b-454c-96e0-fc7d563d300f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://903f9d31bd5f357a5a1f439a158788c1930d0f63cd50e809028689cfa7884e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7njgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6l7rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.657478 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2320ec4-2250-4b21-9540-a1cba167b158\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69f54a203eacaea308baea34fc388357ae2762ef503475c8438f326ed643b401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3517c374f232f5fc5707d00d0596ee543879b70ba6f3e35f0c2819cebaa41d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3517c374f232f5fc5707d00d0596ee543879b70ba6f3e35f0c2819cebaa41d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.668533 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.668586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.668601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.668592 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07aa34038d6332ca2ddf610df510586d343655b1e38a2912ea1513cca35f4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"
name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.668625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.668762 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.681140 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.693908 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b2d1282c12fcb8597c7b4c60dc1bab1049afc179eaddbc8a72203b392f78d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.704074 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10c685ba-8fe0-425c-958c-3fb6754d3d84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d56cb97a69888610458dcdfe5fa47e01b6641b0ab8ce5b11c01042001017d633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2h62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p97z4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.715938 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjjqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e044bf1f-26b2-4a39-86e6-0440eff3eaa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:30Z\\\",\\\"message\\\":\\\"2026-02-17T13:36:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01\\\\n2026-02-17T13:36:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f8da95b-d2bc-4d33-8a29-38d7a4b0eb01 to /host/opt/cni/bin/\\\\n2026-02-17T13:36:45Z [verbose] multus-daemon started\\\\n2026-02-17T13:36:45Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T13:37:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlkgb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjjqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.733363 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"742e6df8-2a68-426e-982c-ef825c6efca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T13:37:35Z\\\",\\\"message\\\":\\\"/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI021
7 13:37:35.213969 6847 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214004 6847 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 13:37:35.214057 6847 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214085 6847 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.214131 6847 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 13:37:35.222069 6847 factory.go:656] Stopping watch factory\\\\nI0217 13:37:35.266234 6847 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0217 13:37:35.266283 6847 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0217 13:37:35.266359 6847 ovnkube.go:599] Stopped ovnkube\\\\nI0217 13:37:35.266398 6847 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 13:37:35.266476 6847 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T13:37:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb3dc45db5b2c74f
9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg6ql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5cplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.744499 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hngsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a9682c-5dd5-49ce-bd8c-60e91527ec2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db596eafde85d9ad74007393b8576b424cd8561f68f694af82cde80f997a5688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hngsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.756180 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99a4349d-f5a6-431b-b8d6-9de1f9bbe63a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f45aa99773693c07f27add72aa8305d47c48deeb24382aa9ac8bca232fd26ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adb0161ea24422b8b1443a81f00e9c38f60bf8d0718075ce189158200b28a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e1c78ff231c7a76ebb06589f5f14caed8c0b72475083230c80b4512ae2a048\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.768510 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bbd8d51-f387-4de0-b841-e602f7249532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:37:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T13:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a9d10a3cac9b0b43e1e7c3fb80828cd2d694a46314618dc023ffec8a43488c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e06287439e9f86ed8e81d6d3e8b07691439a8227d2eb2b819db1a7e6de078c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f9a3cd67e4819d5c4e24b1b0d8576898a01c75184c958d65ce81f2523ce7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T13:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3492da352fa4e1b54d266098578ea4f42e40cf1ebe2d6e511dc804473efd9e64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:36:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T13:36:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T13:37:51Z is after 2025-08-24T17:21:41Z" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.774456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.774528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.774550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.774579 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.774601 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.877574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.877647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.877665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.877690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.877707 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.981006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.981064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.981084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.981129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:51 crc kubenswrapper[4768]: I0217 13:37:51.981148 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:51Z","lastTransitionTime":"2026-02-17T13:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.084676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.084895 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.084997 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.085065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.085157 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.187943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.187985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.187993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.188010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.188019 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.290115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.290146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.290154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.290168 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.290178 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.405499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.405528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.405536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.405550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.405559 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.514136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.514189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.514200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.514218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.514231 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.533660 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:52 crc kubenswrapper[4768]: E0217 13:37:52.533843 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.562058 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:14:44.305645668 +0000 UTC Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.616023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.616285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.616371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.616437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.616499 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.720442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.720886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.720981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.721117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.721222 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.824005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.824066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.824077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.824115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.824129 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.927229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.927474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.927537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.927602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:52 crc kubenswrapper[4768]: I0217 13:37:52.927656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:52Z","lastTransitionTime":"2026-02-17T13:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.029641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.029676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.029688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.029704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.029716 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.132497 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.132820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.132918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.133048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.133159 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.235988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.236335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.236656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.237040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.237228 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.339894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.339947 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.339969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.340000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.340017 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.442761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.442837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.442857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.442884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.442902 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.533978 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:53 crc kubenswrapper[4768]: E0217 13:37:53.534245 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.534272 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.534377 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:53 crc kubenswrapper[4768]: E0217 13:37:53.534452 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:53 crc kubenswrapper[4768]: E0217 13:37:53.534574 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.547040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.553386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.554178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.554218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.554242 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.562245 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 06:41:50.761302852 +0000 UTC Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.657666 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.657944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.658036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.658187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.658290 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.760978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.761058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.761087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.761159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.761185 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.863981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.864034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.864049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.864075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.864089 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.967532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.967590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.967609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.967634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:53 crc kubenswrapper[4768]: I0217 13:37:53.967650 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:53Z","lastTransitionTime":"2026-02-17T13:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.071208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.071591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.071744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.071881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.072004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.174717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.174959 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.175068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.175217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.175398 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.277778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.277835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.277851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.277876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.277895 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.382232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.382335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.382360 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.382391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.382414 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.485286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.485328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.485339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.485358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.485374 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.534021 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:54 crc kubenswrapper[4768]: E0217 13:37:54.534797 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.563430 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:47:37.323672758 +0000 UTC Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.587763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.587811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.587824 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.587846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.587868 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.690897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.691358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.691587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.691859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.692074 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.794850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.794924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.794941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.794967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.794983 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.898030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.898091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.898136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.898161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:54 crc kubenswrapper[4768]: I0217 13:37:54.898177 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:54Z","lastTransitionTime":"2026-02-17T13:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.001076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.001131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.001139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.001154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.001163 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.103533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.103584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.103597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.103620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.103632 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.207354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.207430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.207455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.207490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.207520 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.311216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.311273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.311285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.311304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.311318 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.414766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.414822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.414839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.414868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.414886 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.518606 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.519019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.519218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.519373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.519517 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.534046 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.534056 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.534187 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:55 crc kubenswrapper[4768]: E0217 13:37:55.534737 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:55 crc kubenswrapper[4768]: E0217 13:37:55.534839 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:55 crc kubenswrapper[4768]: E0217 13:37:55.534914 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.564544 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:10:17.601592048 +0000 UTC Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.623726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.623949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.623971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.623994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.624012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.727275 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.727329 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.727346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.727371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.727387 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.830327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.830381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.830391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.830407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.830417 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.933428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.933479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.933491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.933513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:55 crc kubenswrapper[4768]: I0217 13:37:55.933526 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:55Z","lastTransitionTime":"2026-02-17T13:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.036459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.036513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.036525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.036546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.036559 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.140047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.140163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.140175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.140191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.140201 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.243554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.243593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.243660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.243677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.243689 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.346467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.346542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.346567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.346596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.346615 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.450467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.450546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.450573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.450602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.450625 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.534066 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:56 crc kubenswrapper[4768]: E0217 13:37:56.534462 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.559689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.559713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.559721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.559737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.559747 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.565993 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:35:42.628426248 +0000 UTC Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.663492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.663643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.663658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.663681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.663695 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.766529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.766613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.766635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.766669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.766690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.869471 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.869514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.869529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.869552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.869569 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.971964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.972024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.972035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.972054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:56 crc kubenswrapper[4768]: I0217 13:37:56.972070 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:56Z","lastTransitionTime":"2026-02-17T13:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.082639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.082689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.082702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.082719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.082730 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.185392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.185453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.185468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.185488 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.185500 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.288272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.288569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.288577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.288592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.288600 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.391228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.391255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.391267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.391282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.391294 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.493498 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.493530 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.493539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.493553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.493561 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.533313 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.533342 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:57 crc kubenswrapper[4768]: E0217 13:37:57.533457 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:57 crc kubenswrapper[4768]: E0217 13:37:57.533526 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.533347 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:57 crc kubenswrapper[4768]: E0217 13:37:57.533853 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.566606 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:06:31.04114087 +0000 UTC Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.596165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.596221 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.596238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.596259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.596276 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.698920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.698956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.698968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.698982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.698990 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.801624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.801983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.802204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.802383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.802528 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.905256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.905304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.905318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.905337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:57 crc kubenswrapper[4768]: I0217 13:37:57.905349 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:57Z","lastTransitionTime":"2026-02-17T13:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.008026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.008060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.008071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.008088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.008123 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.111543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.111617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.111642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.111671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.111692 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.215200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.215267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.215287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.215316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.215336 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.318442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.318583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.318609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.318637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.318655 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.423688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.423748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.423765 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.423789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.423805 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.527367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.527437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.527452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.527477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.527493 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.533717 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:58 crc kubenswrapper[4768]: E0217 13:37:58.533885 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.548230 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.567350 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:12:35.276154553 +0000 UTC Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.632896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.632984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.633004 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.633029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.633080 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.737538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.737621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.737643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.737671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.737690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.840020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.840059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.840067 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.840083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.840094 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.947674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.947729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.947739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.947756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:58 crc kubenswrapper[4768]: I0217 13:37:58.947766 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:58Z","lastTransitionTime":"2026-02-17T13:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.051185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.051503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.051780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.052064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.052291 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.154157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.154218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.154236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.154261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.154279 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.257141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.257202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.257220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.257245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.257261 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.360075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.360178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.360201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.360230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.360252 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.381086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:37:59 crc kubenswrapper[4768]: E0217 13:37:59.381268 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:59 crc kubenswrapper[4768]: E0217 13:37:59.381367 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs podName:8c8b1469-ed55-4743-9553-f81efd79e5f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:39:03.381344123 +0000 UTC m=+162.660730585 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs") pod "network-metrics-daemon-5bxh7" (UID: "8c8b1469-ed55-4743-9553-f81efd79e5f1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.468017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.468057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.468065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.468079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.468087 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.533944 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:37:59 crc kubenswrapper[4768]: E0217 13:37:59.534322 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.534033 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:37:59 crc kubenswrapper[4768]: E0217 13:37:59.534565 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.533984 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:37:59 crc kubenswrapper[4768]: E0217 13:37:59.534798 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.568418 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 00:18:58.519618427 +0000 UTC Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.571211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.571258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.571277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.571302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.571320 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.673807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.673878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.673902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.673931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.673954 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.777001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.777072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.777090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.777173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.777186 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.879964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.880006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.880018 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.880035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.880046 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.913531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.913607 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.913627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.913655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.913673 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T13:37:59Z","lastTransitionTime":"2026-02-17T13:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.977758 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7"] Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.978195 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.981153 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.982274 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.984556 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 13:37:59 crc kubenswrapper[4768]: I0217 13:37:59.985142 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.033528 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6xvnz" podStartSLOduration=79.03350205 podStartE2EDuration="1m19.03350205s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.012561758 +0000 UTC m=+99.291948250" watchObservedRunningTime="2026-02-17 13:38:00.03350205 +0000 UTC m=+99.312888492" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.050011 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-62mzv" podStartSLOduration=78.049983686 podStartE2EDuration="1m18.049983686s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.034329133 +0000 UTC m=+99.313715575" watchObservedRunningTime="2026-02-17 13:38:00.049983686 
+0000 UTC m=+99.329370168" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.089230 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.089318 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.089351 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.089382 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.089465 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.103252 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.103232 podStartE2EDuration="1m19.103232s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.086872257 +0000 UTC m=+99.366258719" watchObservedRunningTime="2026-02-17 13:38:00.103232 +0000 UTC m=+99.382618442" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.179130 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6l7rv" podStartSLOduration=80.179093822 podStartE2EDuration="1m20.179093822s" podCreationTimestamp="2026-02-17 13:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.164522781 +0000 UTC m=+99.443909223" watchObservedRunningTime="2026-02-17 13:38:00.179093822 +0000 UTC m=+99.458480264" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.190821 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.190944 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.190971 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.190987 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.191023 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.191115 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 
13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.191150 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.191790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.195660 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.195645619 podStartE2EDuration="18.195645619s" podCreationTimestamp="2026-02-17 13:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.179799512 +0000 UTC m=+99.459185954" watchObservedRunningTime="2026-02-17 13:38:00.195645619 +0000 UTC m=+99.475032061" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.202697 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.237521 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zxdb7\" (UID: \"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.280305 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podStartSLOduration=79.280282109 podStartE2EDuration="1m19.280282109s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.268064515 +0000 UTC m=+99.547450947" watchObservedRunningTime="2026-02-17 13:38:00.280282109 +0000 UTC m=+99.559668551" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.280758 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jjjqj" podStartSLOduration=79.280752622 podStartE2EDuration="1m19.280752622s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.2806544 +0000 UTC m=+99.560040872" watchObservedRunningTime="2026-02-17 13:38:00.280752622 +0000 UTC m=+99.560139064" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.300878 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.347312 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hngsc" podStartSLOduration=79.347293432 podStartE2EDuration="1m19.347293432s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.334808219 +0000 UTC m=+99.614194661" watchObservedRunningTime="2026-02-17 13:38:00.347293432 +0000 UTC m=+99.626679894" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.347743 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.347736334 podStartE2EDuration="1m15.347736334s" podCreationTimestamp="2026-02-17 13:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.346687755 +0000 UTC m=+99.626074197" watchObservedRunningTime="2026-02-17 13:38:00.347736334 +0000 UTC m=+99.627122786" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.382361 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.382343281 podStartE2EDuration="50.382343281s" podCreationTimestamp="2026-02-17 13:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.361281687 +0000 UTC m=+99.640668129" watchObservedRunningTime="2026-02-17 13:38:00.382343281 +0000 UTC m=+99.661729723" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.382499 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd/etcd-crc" podStartSLOduration=2.382494905 podStartE2EDuration="2.382494905s" podCreationTimestamp="2026-02-17 13:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:00.381659212 +0000 UTC m=+99.661045664" watchObservedRunningTime="2026-02-17 13:38:00.382494905 +0000 UTC m=+99.661881347" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.533732 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:00 crc kubenswrapper[4768]: E0217 13:38:00.534154 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.569221 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:56:40.768346259 +0000 UTC Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.569293 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 13:38:00 crc kubenswrapper[4768]: I0217 13:38:00.575818 4768 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 13:38:01 crc kubenswrapper[4768]: I0217 13:38:01.061589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" event={"ID":"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93","Type":"ContainerStarted","Data":"e067a5ffa39a9a0d182683eae123f77ae29eddd2a46f464e4d71e2863bc95132"} 
Feb 17 13:38:01 crc kubenswrapper[4768]: I0217 13:38:01.534286 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:01 crc kubenswrapper[4768]: I0217 13:38:01.534316 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:01 crc kubenswrapper[4768]: I0217 13:38:01.534445 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:01 crc kubenswrapper[4768]: E0217 13:38:01.543724 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:01 crc kubenswrapper[4768]: E0217 13:38:01.544247 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:01 crc kubenswrapper[4768]: E0217 13:38:01.544397 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:02 crc kubenswrapper[4768]: I0217 13:38:02.065427 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" event={"ID":"1cb2dd75-6f0d-4b06-962f-3dba6dbe9d93","Type":"ContainerStarted","Data":"862bc289fac3ee2881261e5acf7526c47e178011fd8913ce9bdf570d680dae64"} Feb 17 13:38:02 crc kubenswrapper[4768]: I0217 13:38:02.533876 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:02 crc kubenswrapper[4768]: E0217 13:38:02.534204 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:03 crc kubenswrapper[4768]: I0217 13:38:03.533903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:03 crc kubenswrapper[4768]: I0217 13:38:03.534045 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:03 crc kubenswrapper[4768]: I0217 13:38:03.534553 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:03 crc kubenswrapper[4768]: E0217 13:38:03.534801 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:03 crc kubenswrapper[4768]: E0217 13:38:03.534895 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:03 crc kubenswrapper[4768]: E0217 13:38:03.534984 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:03 crc kubenswrapper[4768]: I0217 13:38:03.535301 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9" Feb 17 13:38:03 crc kubenswrapper[4768]: E0217 13:38:03.535622 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5cplg_openshift-ovn-kubernetes(742e6df8-2a68-426e-982c-ef825c6efca3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" Feb 17 13:38:04 crc kubenswrapper[4768]: I0217 13:38:04.534212 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:04 crc kubenswrapper[4768]: E0217 13:38:04.534394 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:05 crc kubenswrapper[4768]: I0217 13:38:05.533983 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:05 crc kubenswrapper[4768]: I0217 13:38:05.534050 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:05 crc kubenswrapper[4768]: E0217 13:38:05.534262 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:05 crc kubenswrapper[4768]: I0217 13:38:05.534622 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:05 crc kubenswrapper[4768]: E0217 13:38:05.534763 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:05 crc kubenswrapper[4768]: E0217 13:38:05.535083 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:06 crc kubenswrapper[4768]: I0217 13:38:06.533485 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:06 crc kubenswrapper[4768]: E0217 13:38:06.534083 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:07 crc kubenswrapper[4768]: I0217 13:38:07.533489 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:07 crc kubenswrapper[4768]: I0217 13:38:07.533539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:07 crc kubenswrapper[4768]: E0217 13:38:07.533618 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:07 crc kubenswrapper[4768]: I0217 13:38:07.533539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:07 crc kubenswrapper[4768]: E0217 13:38:07.533704 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:07 crc kubenswrapper[4768]: E0217 13:38:07.533770 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:08 crc kubenswrapper[4768]: I0217 13:38:08.534167 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:08 crc kubenswrapper[4768]: E0217 13:38:08.534777 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:09 crc kubenswrapper[4768]: I0217 13:38:09.533888 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:09 crc kubenswrapper[4768]: E0217 13:38:09.534014 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:09 crc kubenswrapper[4768]: I0217 13:38:09.533906 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:09 crc kubenswrapper[4768]: I0217 13:38:09.534123 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:09 crc kubenswrapper[4768]: E0217 13:38:09.534197 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:09 crc kubenswrapper[4768]: E0217 13:38:09.534284 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:10 crc kubenswrapper[4768]: I0217 13:38:10.533989 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:10 crc kubenswrapper[4768]: E0217 13:38:10.534398 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:11 crc kubenswrapper[4768]: I0217 13:38:11.534022 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:11 crc kubenswrapper[4768]: I0217 13:38:11.534184 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:11 crc kubenswrapper[4768]: I0217 13:38:11.535387 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:11 crc kubenswrapper[4768]: E0217 13:38:11.535377 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:11 crc kubenswrapper[4768]: E0217 13:38:11.535463 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:11 crc kubenswrapper[4768]: E0217 13:38:11.535739 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:12 crc kubenswrapper[4768]: I0217 13:38:12.533734 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:12 crc kubenswrapper[4768]: E0217 13:38:12.533972 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:13 crc kubenswrapper[4768]: I0217 13:38:13.533438 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:13 crc kubenswrapper[4768]: I0217 13:38:13.533483 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:13 crc kubenswrapper[4768]: E0217 13:38:13.534266 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:13 crc kubenswrapper[4768]: E0217 13:38:13.534274 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:13 crc kubenswrapper[4768]: I0217 13:38:13.533521 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:13 crc kubenswrapper[4768]: E0217 13:38:13.534599 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:14 crc kubenswrapper[4768]: I0217 13:38:14.533612 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:14 crc kubenswrapper[4768]: E0217 13:38:14.533779 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:15 crc kubenswrapper[4768]: I0217 13:38:15.533311 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:15 crc kubenswrapper[4768]: E0217 13:38:15.533767 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:15 crc kubenswrapper[4768]: I0217 13:38:15.533450 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:15 crc kubenswrapper[4768]: E0217 13:38:15.534251 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:15 crc kubenswrapper[4768]: I0217 13:38:15.533360 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:15 crc kubenswrapper[4768]: E0217 13:38:15.534550 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:16 crc kubenswrapper[4768]: I0217 13:38:16.533781 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:16 crc kubenswrapper[4768]: E0217 13:38:16.533915 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.118372 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/1.log" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.119130 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/0.log" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.119250 4768 generic.go:334] "Generic (PLEG): container finished" podID="e044bf1f-26b2-4a39-86e6-0440eff3eaa9" containerID="8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd" exitCode=1 Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.119297 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerDied","Data":"8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd"} Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.119405 4768 scope.go:117] "RemoveContainer" containerID="19bc9ea99c2b3b0015849f2a4c730a14e048c35cf69f5b4025a4d51d707cf9f1" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.120367 4768 scope.go:117] "RemoveContainer" containerID="8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd" Feb 17 13:38:17 crc kubenswrapper[4768]: E0217 13:38:17.120903 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-jjjqj_openshift-multus(e044bf1f-26b2-4a39-86e6-0440eff3eaa9)\"" pod="openshift-multus/multus-jjjqj" podUID="e044bf1f-26b2-4a39-86e6-0440eff3eaa9" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.144986 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zxdb7" podStartSLOduration=96.144867703 podStartE2EDuration="1m36.144867703s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:02.079970491 +0000 UTC m=+101.359356933" watchObservedRunningTime="2026-02-17 13:38:17.144867703 +0000 UTC m=+116.424254145" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.533425 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.533425 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:17 crc kubenswrapper[4768]: E0217 13:38:17.533616 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.533762 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:17 crc kubenswrapper[4768]: E0217 13:38:17.533806 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:17 crc kubenswrapper[4768]: E0217 13:38:17.534303 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:17 crc kubenswrapper[4768]: I0217 13:38:17.534447 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9" Feb 17 13:38:18 crc kubenswrapper[4768]: I0217 13:38:18.123746 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/1.log" Feb 17 13:38:18 crc kubenswrapper[4768]: I0217 13:38:18.126186 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/3.log" Feb 17 13:38:18 crc kubenswrapper[4768]: I0217 13:38:18.128678 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerStarted","Data":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"} Feb 17 13:38:18 crc kubenswrapper[4768]: I0217 13:38:18.129080 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:38:18 crc kubenswrapper[4768]: I0217 13:38:18.156987 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podStartSLOduration=96.156970404 podStartE2EDuration="1m36.156970404s" 
podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:18.155437971 +0000 UTC m=+117.434824433" watchObservedRunningTime="2026-02-17 13:38:18.156970404 +0000 UTC m=+117.436356846" Feb 17 13:38:18 crc kubenswrapper[4768]: I0217 13:38:18.450516 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5bxh7"] Feb 17 13:38:18 crc kubenswrapper[4768]: I0217 13:38:18.450720 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:18 crc kubenswrapper[4768]: E0217 13:38:18.450813 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:19 crc kubenswrapper[4768]: I0217 13:38:19.533456 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:19 crc kubenswrapper[4768]: E0217 13:38:19.533586 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:19 crc kubenswrapper[4768]: I0217 13:38:19.533591 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:19 crc kubenswrapper[4768]: I0217 13:38:19.533676 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:19 crc kubenswrapper[4768]: E0217 13:38:19.533718 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:19 crc kubenswrapper[4768]: E0217 13:38:19.533747 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:19 crc kubenswrapper[4768]: I0217 13:38:19.533791 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:19 crc kubenswrapper[4768]: E0217 13:38:19.533862 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:21 crc kubenswrapper[4768]: I0217 13:38:21.533454 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:21 crc kubenswrapper[4768]: I0217 13:38:21.533479 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:21 crc kubenswrapper[4768]: I0217 13:38:21.533537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:21 crc kubenswrapper[4768]: E0217 13:38:21.535984 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:21 crc kubenswrapper[4768]: I0217 13:38:21.536057 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:21 crc kubenswrapper[4768]: E0217 13:38:21.536239 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:21 crc kubenswrapper[4768]: E0217 13:38:21.536367 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:21 crc kubenswrapper[4768]: E0217 13:38:21.536490 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:21 crc kubenswrapper[4768]: E0217 13:38:21.569067 4768 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 13:38:21 crc kubenswrapper[4768]: E0217 13:38:21.620063 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:38:23 crc kubenswrapper[4768]: I0217 13:38:23.533837 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:23 crc kubenswrapper[4768]: I0217 13:38:23.533951 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:23 crc kubenswrapper[4768]: E0217 13:38:23.534233 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:23 crc kubenswrapper[4768]: I0217 13:38:23.534293 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:23 crc kubenswrapper[4768]: I0217 13:38:23.534320 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:38:23 crc kubenswrapper[4768]: E0217 13:38:23.534394 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 13:38:23 crc kubenswrapper[4768]: E0217 13:38:23.534444 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 13:38:23 crc kubenswrapper[4768]: E0217 13:38:23.534591 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1" Feb 17 13:38:25 crc kubenswrapper[4768]: I0217 13:38:25.534150 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:25 crc kubenswrapper[4768]: I0217 13:38:25.534212 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:25 crc kubenswrapper[4768]: E0217 13:38:25.534287 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 13:38:25 crc kubenswrapper[4768]: I0217 13:38:25.534301 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:38:25 crc kubenswrapper[4768]: E0217 13:38:25.534439 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:38:25 crc kubenswrapper[4768]: E0217 13:38:25.534610 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:38:25 crc kubenswrapper[4768]: I0217 13:38:25.534925 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7"
Feb 17 13:38:25 crc kubenswrapper[4768]: E0217 13:38:25.535192 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1"
Feb 17 13:38:26 crc kubenswrapper[4768]: E0217 13:38:26.621772 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 13:38:27 crc kubenswrapper[4768]: I0217 13:38:27.533736 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:38:27 crc kubenswrapper[4768]: I0217 13:38:27.533913 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:38:27 crc kubenswrapper[4768]: I0217 13:38:27.533946 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:38:27 crc kubenswrapper[4768]: E0217 13:38:27.533936 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:38:27 crc kubenswrapper[4768]: I0217 13:38:27.533986 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7"
Feb 17 13:38:27 crc kubenswrapper[4768]: E0217 13:38:27.534269 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:38:27 crc kubenswrapper[4768]: E0217 13:38:27.534345 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:38:27 crc kubenswrapper[4768]: E0217 13:38:27.534520 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1"
Feb 17 13:38:29 crc kubenswrapper[4768]: I0217 13:38:29.533586 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:38:29 crc kubenswrapper[4768]: I0217 13:38:29.533618 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:38:29 crc kubenswrapper[4768]: I0217 13:38:29.533648 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7"
Feb 17 13:38:29 crc kubenswrapper[4768]: I0217 13:38:29.533614 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:38:29 crc kubenswrapper[4768]: E0217 13:38:29.533710 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:38:29 crc kubenswrapper[4768]: E0217 13:38:29.533843 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1"
Feb 17 13:38:29 crc kubenswrapper[4768]: E0217 13:38:29.533891 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:38:29 crc kubenswrapper[4768]: E0217 13:38:29.533943 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:38:31 crc kubenswrapper[4768]: I0217 13:38:31.533699 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7"
Feb 17 13:38:31 crc kubenswrapper[4768]: I0217 13:38:31.533812 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:38:31 crc kubenswrapper[4768]: E0217 13:38:31.535048 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1"
Feb 17 13:38:31 crc kubenswrapper[4768]: I0217 13:38:31.535145 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:38:31 crc kubenswrapper[4768]: I0217 13:38:31.535165 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:38:31 crc kubenswrapper[4768]: E0217 13:38:31.535299 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:38:31 crc kubenswrapper[4768]: E0217 13:38:31.535472 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:38:31 crc kubenswrapper[4768]: I0217 13:38:31.535552 4768 scope.go:117] "RemoveContainer" containerID="8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd"
Feb 17 13:38:31 crc kubenswrapper[4768]: E0217 13:38:31.535590 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:38:31 crc kubenswrapper[4768]: E0217 13:38:31.623697 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 13:38:32 crc kubenswrapper[4768]: I0217 13:38:32.171142 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/1.log"
Feb 17 13:38:32 crc kubenswrapper[4768]: I0217 13:38:32.171199 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerStarted","Data":"f0eabe9e6b5551e88ed34f7b32f5573dd3d736e0c52761e08b1a6b74957522ef"}
Feb 17 13:38:33 crc kubenswrapper[4768]: I0217 13:38:33.534198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:38:33 crc kubenswrapper[4768]: E0217 13:38:33.534898 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:38:33 crc kubenswrapper[4768]: I0217 13:38:33.534254 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7"
Feb 17 13:38:33 crc kubenswrapper[4768]: I0217 13:38:33.534245 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:38:33 crc kubenswrapper[4768]: I0217 13:38:33.534389 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:38:33 crc kubenswrapper[4768]: E0217 13:38:33.535240 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:38:33 crc kubenswrapper[4768]: E0217 13:38:33.535381 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1"
Feb 17 13:38:33 crc kubenswrapper[4768]: E0217 13:38:33.535559 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:38:35 crc kubenswrapper[4768]: I0217 13:38:35.534021 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:38:35 crc kubenswrapper[4768]: I0217 13:38:35.534068 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7"
Feb 17 13:38:35 crc kubenswrapper[4768]: I0217 13:38:35.534082 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:38:35 crc kubenswrapper[4768]: I0217 13:38:35.534031 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:38:35 crc kubenswrapper[4768]: E0217 13:38:35.534221 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 13:38:35 crc kubenswrapper[4768]: E0217 13:38:35.534452 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 13:38:35 crc kubenswrapper[4768]: E0217 13:38:35.534559 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 13:38:35 crc kubenswrapper[4768]: E0217 13:38:35.534668 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5bxh7" podUID="8c8b1469-ed55-4743-9553-f81efd79e5f1"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.534041 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.534146 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.534440 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.535496 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.535724 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.536469 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.537273 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.537438 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.537578 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 13:38:37 crc kubenswrapper[4768]: I0217 13:38:37.538807 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.483828 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.743811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.788073 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.788458 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.789843 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z2b5c"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.790182 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793095 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793318 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793331 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793336 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793355 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793411 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793454 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793531 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.793626 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.794400 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.794466 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.795736 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.802034 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gwd2q"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.802991 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9fmzj"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.803163 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.803509 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.803666 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fmzj"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.805214 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.805714 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t54pq"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.806131 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.806348 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.806928 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.807217 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.807217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.807538 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.807981 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.807996 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.808627 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.808995 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mvb69"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.809691 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvb69"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.810788 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.811289 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vfpbq"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.811623 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.811939 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.815768 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.816079 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.816584 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gvm54"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.817114 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.818799 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bsrtm"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.819148 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.819611 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5"]
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.819903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.828416 4768 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": failed to list *v1.Secret: secrets "kube-storage-version-migrator-operator-dockercfg-2bh8d" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-storage-version-migrator-operator": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.828486 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2bh8d\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"kube-storage-version-migrator-operator-dockercfg-2bh8d\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-storage-version-migrator-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.828605 4768 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.828616 4768 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: configmaps "etcd-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.828627 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.828685 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.828732 4768 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.828751 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.828778 4768 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-storage-version-migrator-operator": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.828793 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-storage-version-migrator-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.828832 4768 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.828852 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.828921 4768 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.828937 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.829089 4768 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.829133 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.829354 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.829600 4768 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: configmaps "etcd-service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.829629 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.829912 4768 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: configmaps "etcd-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.829939 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.830134 4768 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.830156 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.830190 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.830274 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.830490 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.830567 4768 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: secrets "etcd-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object
Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.830593 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.830491 4768 
reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: secrets "etcd-operator-dockercfg-r9srn" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.830670 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-operator-dockercfg-r9srn\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.830797 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.830943 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.831273 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.831404 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.831589 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.832043 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.832413 4768 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.832557 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h77q6"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.833428 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.833486 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.834301 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.834719 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m66tk"] Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.834877 4768 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.834912 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.834984 4768 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-storage-version-migrator-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.835003 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-storage-version-migrator-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.835095 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.835271 4768 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.835292 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace 
\"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.835345 4768 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-storage-version-migrator-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.835361 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-storage-version-migrator-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.835434 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.835483 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.835569 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.835694 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.835796 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.835858 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.842953 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.843434 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.843665 4768 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-etcd-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.843727 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-etcd-operator\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.844015 4768 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-storage-version-migrator-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.844041 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-storage-version-migrator-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.845969 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.846714 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.848044 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.850084 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.856853 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.858556 4768 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.867552 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.867816 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.867972 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.868092 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.868244 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.870170 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8c6lh"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.870374 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.870728 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hxzgb"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.870805 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.871855 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.872085 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.872214 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.872522 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.874915 4768 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.874953 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.875408 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.877456 4768 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.877490 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.877549 4768 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: secrets "openshift-controller-manager-operator-dockercfg-vw8fw" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.877567 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-operator-dockercfg-vw8fw\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc 
kubenswrapper[4768]: I0217 13:38:40.877989 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.878221 4768 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.878253 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878261 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.878296 4768 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.878309 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878318 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878355 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878460 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878517 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878562 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878663 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.878787 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.879032 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879094 4768 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list 
*v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879161 4768 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: secrets "v4-0-config-system-session" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879176 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-session\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879180 4768 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879201 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 
'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879208 4768 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879194 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879236 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879235 4768 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 
13:38:40.879272 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879259 4768 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879291 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-service-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879302 4768 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879326 4768 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879327 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879339 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879370 4768 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879379 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 
crc kubenswrapper[4768]: W0217 13:38:40.879390 4768 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-controller-manager-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879412 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-controller-manager-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879473 4768 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: secrets "v4-0-config-system-router-certs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.879485 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879490 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-router-certs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API 
group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879517 4768 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: secrets "v4-0-config-system-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879548 4768 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-controller-manager-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager-operator": no relationship found between node 'crc' and this object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879566 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: W0217 13:38:40.879139 4768 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-cliconfig" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this 
object Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879576 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: E0217 13:38:40.879599 4768 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-cliconfig\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.879690 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.879733 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.879815 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.879856 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.881085 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 
13:38:40.881794 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.882606 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.882949 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.883354 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.883510 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.886158 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.886535 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.886983 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.887189 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.887851 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.887872 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.887942 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.889028 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gwd2q"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.892446 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893374 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-audit-policies\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893412 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-trusted-ca-bundle\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893449 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz8kk\" (UniqueName: \"kubernetes.io/projected/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-kube-api-access-mz8kk\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893480 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srjnj\" (UniqueName: \"kubernetes.io/projected/74e5b268-67e3-4e32-bccb-1a1f0717a2db-kube-api-access-srjnj\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893509 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-etcd-client\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893534 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-serving-cert\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893554 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bd94acf-ca75-4475-b5ca-445219fccb15-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893582 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893608 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5cbab6-dd03-463e-9940-ad55678c9e38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nr7\" (UniqueName: \"kubernetes.io/projected/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-kube-api-access-d5nr7\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4mq2\" (UniqueName: \"kubernetes.io/projected/38dc9a37-3332-40e5-b4cd-3c702455584d-kube-api-access-f4mq2\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893682 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893702 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38dc9a37-3332-40e5-b4cd-3c702455584d-config\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893726 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-serving-cert\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893748 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893772 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38dc9a37-3332-40e5-b4cd-3c702455584d-images\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcrjd\" (UniqueName: \"kubernetes.io/projected/0030a046-d1bb-4a34-830c-c275306cee43-kube-api-access-gcrjd\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893814 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74e5b268-67e3-4e32-bccb-1a1f0717a2db-trusted-ca\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893833 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-client-ca\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94acf-ca75-4475-b5ca-445219fccb15-config\") pod \"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893877 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-service-ca\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893917 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39ee8ba0-977c-48f3-8ac9-65b958991220-audit-dir\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893940 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqg88\" (UniqueName: \"kubernetes.io/projected/49314047-3ca6-4f00-bdbc-bfa8a611ddb5-kube-api-access-bqg88\") pod \"dns-operator-744455d44c-gwd2q\" (UID: \"49314047-3ca6-4f00-bdbc-bfa8a611ddb5\") " pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.893960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npp8v\" 
(UniqueName: \"kubernetes.io/projected/ff5cbab6-dd03-463e-9940-ad55678c9e38-kube-api-access-npp8v\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894001 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894023 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-serving-cert\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894043 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-console-config\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894063 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd94acf-ca75-4475-b5ca-445219fccb15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894085 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f568\" (UniqueName: \"kubernetes.io/projected/39ee8ba0-977c-48f3-8ac9-65b958991220-kube-api-access-8f568\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894125 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e5b268-67e3-4e32-bccb-1a1f0717a2db-metrics-tls\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-encryption-config\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-oauth-config\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894191 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-oauth-serving-cert\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894225 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74e5b268-67e3-4e32-bccb-1a1f0717a2db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894255 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/38dc9a37-3332-40e5-b4cd-3c702455584d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894274 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn6fn\" (UniqueName: \"kubernetes.io/projected/7483ebd8-979d-429d-9197-cf5ae208af0a-kube-api-access-jn6fn\") pod \"downloads-7954f5f757-mvb69\" (UID: \"7483ebd8-979d-429d-9197-cf5ae208af0a\") " pod="openshift-console/downloads-7954f5f757-mvb69" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.894292 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49314047-3ca6-4f00-bdbc-bfa8a611ddb5-metrics-tls\") pod \"dns-operator-744455d44c-gwd2q\" (UID: \"49314047-3ca6-4f00-bdbc-bfa8a611ddb5\") " pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 
13:38:40.894309 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-config\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.912106 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.912672 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.913102 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ql5b5"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.913822 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.919086 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.919429 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.926535 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z2b5c"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.926578 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-26dvz"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.927713 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.929425 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fc989"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.930171 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.930305 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.934218 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.943357 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.944642 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.950454 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.950711 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.950948 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.951237 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.954920 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wgl6"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.954923 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.956186 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.957171 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.962843 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.964035 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t54pq"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.964150 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.965317 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.965369 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.967199 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.968338 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vfpbq"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.970186 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.971176 4768 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9fmzj"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.972597 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.975291 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-k6fdj"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.976175 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.979316 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.981775 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m66tk"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.983421 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.985755 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.987787 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.989935 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.990372 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-h77q6"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.992015 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvb69"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.993411 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m"] Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995369 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/38dc9a37-3332-40e5-b4cd-3c702455584d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn6fn\" (UniqueName: \"kubernetes.io/projected/7483ebd8-979d-429d-9197-cf5ae208af0a-kube-api-access-jn6fn\") pod \"downloads-7954f5f757-mvb69\" (UID: \"7483ebd8-979d-429d-9197-cf5ae208af0a\") " pod="openshift-console/downloads-7954f5f757-mvb69" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995429 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49314047-3ca6-4f00-bdbc-bfa8a611ddb5-metrics-tls\") pod \"dns-operator-744455d44c-gwd2q\" (UID: \"49314047-3ca6-4f00-bdbc-bfa8a611ddb5\") " pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-config\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: 
\"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995489 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-audit-policies\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995514 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-trusted-ca-bundle\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz8kk\" (UniqueName: \"kubernetes.io/projected/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-kube-api-access-mz8kk\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srjnj\" (UniqueName: \"kubernetes.io/projected/74e5b268-67e3-4e32-bccb-1a1f0717a2db-kube-api-access-srjnj\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-etcd-client\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995618 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-serving-cert\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bd94acf-ca75-4475-b5ca-445219fccb15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995665 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5cbab6-dd03-463e-9940-ad55678c9e38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995715 
4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5nr7\" (UniqueName: \"kubernetes.io/projected/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-kube-api-access-d5nr7\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4mq2\" (UniqueName: \"kubernetes.io/projected/38dc9a37-3332-40e5-b4cd-3c702455584d-kube-api-access-f4mq2\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995789 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38dc9a37-3332-40e5-b4cd-3c702455584d-config\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-serving-cert\") pod \"route-controller-manager-6576b87f9c-c55bg\" 
(UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995866 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38dc9a37-3332-40e5-b4cd-3c702455584d-images\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995893 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcrjd\" (UniqueName: \"kubernetes.io/projected/0030a046-d1bb-4a34-830c-c275306cee43-kube-api-access-gcrjd\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995917 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74e5b268-67e3-4e32-bccb-1a1f0717a2db-trusted-ca\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995957 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-client-ca\") 
pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.995981 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94acf-ca75-4475-b5ca-445219fccb15-config\") pod \"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996007 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-service-ca\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996050 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39ee8ba0-977c-48f3-8ac9-65b958991220-audit-dir\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996078 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bqg88\" (UniqueName: \"kubernetes.io/projected/49314047-3ca6-4f00-bdbc-bfa8a611ddb5-kube-api-access-bqg88\") pod \"dns-operator-744455d44c-gwd2q\" (UID: \"49314047-3ca6-4f00-bdbc-bfa8a611ddb5\") " pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996105 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npp8v\" (UniqueName: \"kubernetes.io/projected/ff5cbab6-dd03-463e-9940-ad55678c9e38-kube-api-access-npp8v\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996170 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996194 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-serving-cert\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996219 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-console-config\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " 
pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996240 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd94acf-ca75-4475-b5ca-445219fccb15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996268 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f568\" (UniqueName: \"kubernetes.io/projected/39ee8ba0-977c-48f3-8ac9-65b958991220-kube-api-access-8f568\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996289 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e5b268-67e3-4e32-bccb-1a1f0717a2db-metrics-tls\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996310 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-encryption-config\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996334 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-oauth-config\") pod 
\"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996354 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-oauth-serving-cert\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996382 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74e5b268-67e3-4e32-bccb-1a1f0717a2db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996643 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-trusted-ca-bundle\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.996643 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-config\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.997279 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-audit-policies\") pod 
\"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.997387 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38dc9a37-3332-40e5-b4cd-3c702455584d-config\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.997555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.997880 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39ee8ba0-977c-48f3-8ac9-65b958991220-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.998027 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-client-ca\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.998255 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94acf-ca75-4475-b5ca-445219fccb15-config\") pod 
\"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.998486 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38dc9a37-3332-40e5-b4cd-3c702455584d-images\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.998988 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-service-ca\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.999098 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39ee8ba0-977c-48f3-8ac9-65b958991220-audit-dir\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.999232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-console-config\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.999254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74e5b268-67e3-4e32-bccb-1a1f0717a2db-trusted-ca\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: 
\"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:40 crc kubenswrapper[4768]: I0217 13:38:40.999635 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8c6lh"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.000093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-oauth-serving-cert\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.001035 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-etcd-client\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.001316 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.002116 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-serving-cert\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.002765 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-serving-cert\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: 
\"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.002831 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gvm54"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.003345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-encryption-config\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.003547 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bsrtm"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.003975 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd94acf-ca75-4475-b5ca-445219fccb15-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.004161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/38dc9a37-3332-40e5-b4cd-3c702455584d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.004725 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hxzgb"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.005906 
4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ee8ba0-977c-48f3-8ac9-65b958991220-serving-cert\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.006355 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.006441 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-oauth-config\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.007505 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.008014 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e5b268-67e3-4e32-bccb-1a1f0717a2db-metrics-tls\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.008896 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.010843 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.011080 4768 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fc989"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.012135 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.013271 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-26dvz"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.014339 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.015467 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.017158 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ztfdt"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.017866 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.019562 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49314047-3ca6-4f00-bdbc-bfa8a611ddb5-metrics-tls\") pod \"dns-operator-744455d44c-gwd2q\" (UID: \"49314047-3ca6-4f00-bdbc-bfa8a611ddb5\") " pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.019638 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wgl6"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.020071 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pls7p"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.030825 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.033356 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.033450 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.035494 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.037127 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.038260 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pls7p"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.039290 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.040337 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.041460 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ztfdt"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.043245 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sqgwf"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.043855 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sqgwf"] Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.043927 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.050585 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.070770 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.092233 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.110337 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.130675 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.151450 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.175200 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.212666 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.230596 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.251482 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.270149 4768 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.290667 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.310955 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.330132 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.352689 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.370053 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.390956 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.410791 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.430938 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.456596 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.470173 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 13:38:41 crc kubenswrapper[4768]: 
I0217 13:38:41.490856 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.510630 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.530740 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.550722 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.570508 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.610934 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.630662 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.650765 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.672051 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.691708 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.711433 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.730883 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.750685 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.771378 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.790858 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.810485 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.831792 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.851242 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.871229 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.891347 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.910802 4768 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.929877 4768 request.go:700] Waited for 1.015778216s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.931281 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.951648 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.975122 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 13:38:41 crc kubenswrapper[4768]: I0217 13:38:41.991833 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.998716 4768 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.998790 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert podName:ff5cbab6-dd03-463e-9940-ad55678c9e38 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:42.498771429 +0000 UTC m=+141.778157871 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-p7h6m" (UID: "ff5cbab6-dd03-463e-9940-ad55678c9e38") : failed to sync secret cache: timed out waiting for the condition Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.998715 4768 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.998718 4768 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.998999 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5cbab6-dd03-463e-9940-ad55678c9e38-config podName:ff5cbab6-dd03-463e-9940-ad55678c9e38 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:42.498966425 +0000 UTC m=+141.778352877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ff5cbab6-dd03-463e-9940-ad55678c9e38-config") pod "kube-storage-version-migrator-operator-b67b599dd-p7h6m" (UID: "ff5cbab6-dd03-463e-9940-ad55678c9e38") : failed to sync configmap cache: timed out waiting for the condition Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.999025 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-config podName:5d543cd6-dc4d-4ad7-b617-389465cd2cd7 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:42.499015697 +0000 UTC m=+141.778402149 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-config") pod "openshift-controller-manager-operator-756b6f6bc6-nf6w5" (UID: "5d543cd6-dc4d-4ad7-b617-389465cd2cd7") : failed to sync configmap cache: timed out waiting for the condition Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.999848 4768 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 13:38:41 crc kubenswrapper[4768]: E0217 13:38:41.999896 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-serving-cert podName:5d543cd6-dc4d-4ad7-b617-389465cd2cd7 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:42.499887874 +0000 UTC m=+141.779274316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-nf6w5" (UID: "5d543cd6-dc4d-4ad7-b617-389465cd2cd7") : failed to sync secret cache: timed out waiting for the condition Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.010092 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.032031 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.051234 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.071397 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 
17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.092367 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.111189 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.131463 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.151014 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.171489 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.190797 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.210770 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.231351 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.268464 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.277896 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.291056 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.320822 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.330800 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.351373 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.371587 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.390919 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.411126 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.431856 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.450576 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.471133 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.490911 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.511000 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.522215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5cbab6-dd03-463e-9940-ad55678c9e38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.522274 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.522313 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.522371 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: 
\"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.536248 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.550681 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.571246 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.591026 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.611365 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.631011 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.707079 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn6fn\" (UniqueName: \"kubernetes.io/projected/7483ebd8-979d-429d-9197-cf5ae208af0a-kube-api-access-jn6fn\") pod \"downloads-7954f5f757-mvb69\" (UID: \"7483ebd8-979d-429d-9197-cf5ae208af0a\") " pod="openshift-console/downloads-7954f5f757-mvb69" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.717857 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74e5b268-67e3-4e32-bccb-1a1f0717a2db-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.720698 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bd94acf-ca75-4475-b5ca-445219fccb15-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5nwnr\" (UID: \"4bd94acf-ca75-4475-b5ca-445219fccb15\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.745966 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srjnj\" (UniqueName: \"kubernetes.io/projected/74e5b268-67e3-4e32-bccb-1a1f0717a2db-kube-api-access-srjnj\") pod \"ingress-operator-5b745b69d9-r7ksx\" (UID: \"74e5b268-67e3-4e32-bccb-1a1f0717a2db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.766874 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcrjd\" (UniqueName: \"kubernetes.io/projected/0030a046-d1bb-4a34-830c-c275306cee43-kube-api-access-gcrjd\") pod \"console-f9d7485db-9fmzj\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.773223 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.781641 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvb69" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.791226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5nr7\" (UniqueName: \"kubernetes.io/projected/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-kube-api-access-d5nr7\") pod \"route-controller-manager-6576b87f9c-c55bg\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.802033 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.808553 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4mq2\" (UniqueName: \"kubernetes.io/projected/38dc9a37-3332-40e5-b4cd-3c702455584d-kube-api-access-f4mq2\") pod \"machine-api-operator-5694c8668f-z2b5c\" (UID: \"38dc9a37-3332-40e5-b4cd-3c702455584d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.857604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqg88\" (UniqueName: \"kubernetes.io/projected/49314047-3ca6-4f00-bdbc-bfa8a611ddb5-kube-api-access-bqg88\") pod \"dns-operator-744455d44c-gwd2q\" (UID: \"49314047-3ca6-4f00-bdbc-bfa8a611ddb5\") " pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.865159 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f568\" (UniqueName: \"kubernetes.io/projected/39ee8ba0-977c-48f3-8ac9-65b958991220-kube-api-access-8f568\") pod \"apiserver-7bbb656c7d-n5tl8\" (UID: \"39ee8ba0-977c-48f3-8ac9-65b958991220\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:42 crc 
kubenswrapper[4768]: I0217 13:38:42.870160 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.892028 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.904426 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.912791 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.935186 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.935225 4768 request.go:700] Waited for 1.916948499s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.935255 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.936318 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.951393 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.970087 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.971583 4768 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.987793 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:42 crc kubenswrapper[4768]: I0217 13:38:42.996662 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.013029 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.033669 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.050844 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.115066 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvb69"] Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.117271 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 13:38:43 crc kubenswrapper[4768]: W0217 13:38:43.123745 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7483ebd8_979d_429d_9197_cf5ae208af0a.slice/crio-8e270cfca9ba4e94d1d37ebce17fdbad0821e8f50aedb007011a74ca6d412219 WatchSource:0}: Error finding container 8e270cfca9ba4e94d1d37ebce17fdbad0821e8f50aedb007011a74ca6d412219: Status 404 returned error can't find the container with id 
8e270cfca9ba4e94d1d37ebce17fdbad0821e8f50aedb007011a74ca6d412219 Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.130282 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138492 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138532 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6gl\" (UniqueName: \"kubernetes.io/projected/28505874-0a70-4f53-8070-607918790abe-kube-api-access-6k6gl\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138553 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-config\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138571 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-client-ca\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138664 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3084c4ad-c24d-48e2-9734-99ca07d07bab-serving-cert\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138733 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138791 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb861f06-bf99-46e8-8627-c3d99245994b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138814 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138831 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cccpk\" (UniqueName: \"kubernetes.io/projected/0af47ec2-b35c-48df-8f91-9c878fb5ee94-kube-api-access-cccpk\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138854 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-ca\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.138923 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9cf79399-929e-43c8-9ceb-06619ef1edee-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139079 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-bound-sa-token\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139109 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139129 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27d2fb4d-0721-4384-bfd5-2070137b6e1c-serving-cert\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139144 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgph4\" (UniqueName: \"kubernetes.io/projected/27d2fb4d-0721-4384-bfd5-2070137b6e1c-kube-api-access-xgph4\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc 
kubenswrapper[4768]: I0217 13:38:43.139175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9cf79399-929e-43c8-9ceb-06619ef1edee-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139191 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrm4q\" (UniqueName: \"kubernetes.io/projected/3084c4ad-c24d-48e2-9734-99ca07d07bab-kube-api-access-vrm4q\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139210 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/1bc575bb-6b05-4fe4-92fb-e467de4810b7-kube-api-access-xwr7q\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139226 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-config\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.139253 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:43.639238306 +0000 UTC m=+142.918624748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139278 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28505874-0a70-4f53-8070-607918790abe-auth-proxy-config\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139311 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprxg\" (UniqueName: \"kubernetes.io/projected/aa502e6c-070c-46ab-a1b8-82e34b55aad7-kube-api-access-pprxg\") pod \"migrator-59844c95c7-ds4bl\" (UID: \"aa502e6c-070c-46ab-a1b8-82e34b55aad7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139329 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr2bq\" (UniqueName: \"kubernetes.io/projected/b7837593-1275-40cb-820f-afe9cb13fad4-kube-api-access-dr2bq\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139352 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28505874-0a70-4f53-8070-607918790abe-config\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139369 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139407 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139440 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-trusted-ca\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139475 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-audit-policies\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139506 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxfph\" (UniqueName: \"kubernetes.io/projected/eb861f06-bf99-46e8-8627-c3d99245994b-kube-api-access-nxfph\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139523 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-client\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139540 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc 
kubenswrapper[4768]: I0217 13:38:43.139560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af47ec2-b35c-48df-8f91-9c878fb5ee94-serving-cert\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139623 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7837593-1275-40cb-820f-afe9cb13fad4-audit-dir\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb861f06-bf99-46e8-8627-c3d99245994b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139697 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bc575bb-6b05-4fe4-92fb-e467de4810b7-serving-cert\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-tls\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139793 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/28505874-0a70-4f53-8070-607918790abe-machine-approver-tls\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139808 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb861f06-bf99-46e8-8627-c3d99245994b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139834 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: 
\"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/27d2fb4d-0721-4384-bfd5-2070137b6e1c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139864 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-service-ca\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139879 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-service-ca-bundle\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139909 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-certificates\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpcl\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-kube-api-access-7jpcl\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139937 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.139987 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-config\") 
pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.140003 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98sh9\" (UniqueName: \"kubernetes.io/projected/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-kube-api-access-98sh9\") pod \"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.140018 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.151220 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.171862 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.192380 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg"] Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.193411 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.201322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-mvb69" event={"ID":"7483ebd8-979d-429d-9197-cf5ae208af0a","Type":"ContainerStarted","Data":"8e270cfca9ba4e94d1d37ebce17fdbad0821e8f50aedb007011a74ca6d412219"} Feb 17 13:38:43 crc kubenswrapper[4768]: W0217 13:38:43.210064 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97d3e088_9eff_4b50_a5ac_c5bd6bfcb773.slice/crio-ad40713bc17388a9ffd14843fcb8a014b014908d81db7a14249450bbe09501b3 WatchSource:0}: Error finding container ad40713bc17388a9ffd14843fcb8a014b014908d81db7a14249450bbe09501b3: Status 404 returned error can't find the container with id ad40713bc17388a9ffd14843fcb8a014b014908d81db7a14249450bbe09501b3 Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.210199 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.218460 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.230382 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247016 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247199 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/150a773f-d920-422b-b8d3-3e33876a0642-node-pullsecrets\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/28505874-0a70-4f53-8070-607918790abe-machine-approver-tls\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247255 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5396bd13-b4d6-42d2-834d-36e8e88715b5-proxy-tls\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247282 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/27d2fb4d-0721-4384-bfd5-2070137b6e1c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247301 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-mountpoint-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247317 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247349 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-service-ca\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247364 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247382 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43681511-fb8b-441c-bde6-0b1fa3cd8955-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247398 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jpcl\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-kube-api-access-7jpcl\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247413 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4tmz\" (UniqueName: \"kubernetes.io/projected/d9e8c957-294c-4be0-812d-9cc81edf44f6-kube-api-access-k4tmz\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247437 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-plugins-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247453 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-98sh9\" (UniqueName: \"kubernetes.io/projected/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-kube-api-access-98sh9\") pod \"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247496 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/31c906f5-7452-4a91-ac3e-3c230e7785aa-certs\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247510 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cdvg\" (UniqueName: \"kubernetes.io/projected/7aedec6a-6287-4e21-92a5-c818c2879842-kube-api-access-6cdvg\") pod \"ingress-canary-ztfdt\" (UID: \"7aedec6a-6287-4e21-92a5-c818c2879842\") " pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247525 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2x7\" (UniqueName: \"kubernetes.io/projected/00ff8eee-3713-495f-a7c7-d05bba726cda-kube-api-access-5s2x7\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/451ced6a-ebdc-43a3-9639-5d74f0885fed-metrics-tls\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 
13:38:43.247551 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-etcd-client\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247601 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c14ef86-26dd-4ad1-854e-2592ba200b02-serving-cert\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247615 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v5pz\" (UniqueName: \"kubernetes.io/projected/150a773f-d920-422b-b8d3-3e33876a0642-kube-api-access-5v5pz\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247631 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3084c4ad-c24d-48e2-9734-99ca07d07bab-serving-cert\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247645 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247662 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43681511-fb8b-441c-bde6-0b1fa3cd8955-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247689 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5396bd13-b4d6-42d2-834d-36e8e88715b5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49862fb8-6a93-48ac-926a-846f72a67989-service-ca-bundle\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247719 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cccpk\" (UniqueName: \"kubernetes.io/projected/0af47ec2-b35c-48df-8f91-9c878fb5ee94-kube-api-access-cccpk\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247733 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0face492-83c1-49d4-bc1e-7de407151988-secret-volume\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2sh\" (UniqueName: \"kubernetes.io/projected/77b1f878-6463-4342-b3f1-96c32e69e4d9-kube-api-access-mv2sh\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-config\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247782 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k94d\" (UniqueName: \"kubernetes.io/projected/49862fb8-6a93-48ac-926a-846f72a67989-kube-api-access-2k94d\") pod \"router-default-5444994796-ql5b5\" (UID: 
\"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27d2fb4d-0721-4384-bfd5-2070137b6e1c-serving-cert\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247820 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgph4\" (UniqueName: \"kubernetes.io/projected/27d2fb4d-0721-4384-bfd5-2070137b6e1c-kube-api-access-xgph4\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5d6n\" (UniqueName: \"kubernetes.io/projected/6205a23a-a18f-44c3-82be-ccecbf757630-kube-api-access-v5d6n\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9cf79399-929e-43c8-9ceb-06619ef1edee-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77b1f878-6463-4342-b3f1-96c32e69e4d9-signing-cabundle\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247911 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28505874-0a70-4f53-8070-607918790abe-auth-proxy-config\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247927 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9548bee5-799d-49de-bc66-296f14396f43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vgjr8\" (UID: \"9548bee5-799d-49de-bc66-296f14396f43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247948 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr2bq\" (UniqueName: \"kubernetes.io/projected/b7837593-1275-40cb-820f-afe9cb13fad4-kube-api-access-dr2bq\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247965 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6jx\" (UniqueName: \"kubernetes.io/projected/a192d1d8-c111-4c76-b256-7110fc99b045-kube-api-access-5r6jx\") pod \"multus-admission-controller-857f4d67dd-26dvz\" (UID: 
\"a192d1d8-c111-4c76-b256-7110fc99b045\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.247980 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkffj\" (UniqueName: \"kubernetes.io/projected/4c14ef86-26dd-4ad1-854e-2592ba200b02-kube-api-access-hkffj\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28505874-0a70-4f53-8070-607918790abe-config\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248018 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248032 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a192d1d8-c111-4c76-b256-7110fc99b045-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-26dvz\" (UID: \"a192d1d8-c111-4c76-b256-7110fc99b045\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9e8c957-294c-4be0-812d-9cc81edf44f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-image-import-ca\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248078 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248093 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-srv-cert\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248125 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxfph\" (UniqueName: \"kubernetes.io/projected/eb861f06-bf99-46e8-8627-c3d99245994b-kube-api-access-nxfph\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-client\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248155 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af47ec2-b35c-48df-8f91-9c878fb5ee94-serving-cert\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e871b050-8136-486d-abc5-59a91f53d26c-srv-cert\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248186 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7837593-1275-40cb-820f-afe9cb13fad4-audit-dir\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248203 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/150a773f-d920-422b-b8d3-3e33876a0642-audit-dir\") pod 
\"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248219 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bc575bb-6b05-4fe4-92fb-e467de4810b7-serving-cert\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248233 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-audit\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248253 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-tls\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248268 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248284 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwbct\" 
(UniqueName: \"kubernetes.io/projected/0face492-83c1-49d4-bc1e-7de407151988-kube-api-access-zwbct\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248301 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb861f06-bf99-46e8-8627-c3d99245994b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248315 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248331 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6205a23a-a18f-44c3-82be-ccecbf757630-config\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248365 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62g7l\" (UniqueName: \"kubernetes.io/projected/5dc809a2-ba43-4858-9f51-ce7f2e366f29-kube-api-access-62g7l\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: 
\"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248382 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28c95f1d-fd75-4161-92ca-4cc1e928a1ba-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rjn87\" (UID: \"28c95f1d-fd75-4161-92ca-4cc1e928a1ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248401 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-service-ca-bundle\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248416 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2sx4\" (UniqueName: \"kubernetes.io/projected/451ced6a-ebdc-43a3-9639-5d74f0885fed-kube-api-access-p2sx4\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248430 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29z5d\" (UniqueName: \"kubernetes.io/projected/e871b050-8136-486d-abc5-59a91f53d26c-kube-api-access-29z5d\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248454 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-etcd-serving-ca\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248540 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-certificates\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.248771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.249373 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/31c906f5-7452-4a91-ac3e-3c230e7785aa-node-bootstrap-token\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.249478 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:43.749430812 +0000 UTC m=+143.028817284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.249568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.249685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-config\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.249759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.249890 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/b7837593-1275-40cb-820f-afe9cb13fad4-audit-dir\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.249857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250049 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k6gl\" (UniqueName: \"kubernetes.io/projected/28505874-0a70-4f53-8070-607918790abe-kube-api-access-6k6gl\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250068 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-config\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250085 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-client-ca\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc 
kubenswrapper[4768]: I0217 13:38:43.250102 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/27d2fb4d-0721-4384-bfd5-2070137b6e1c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250142 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9e8c957-294c-4be0-812d-9cc81edf44f6-tmpfs\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250188 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-default-certificate\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c14ef86-26dd-4ad1-854e-2592ba200b02-config\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250367 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43681511-fb8b-441c-bde6-0b1fa3cd8955-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250431 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250432 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9cf79399-929e-43c8-9ceb-06619ef1edee-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 
13:38:43.250465 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:43.750454865 +0000 UTC m=+143.029841307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250581 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-service-ca-bundle\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250450 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-csi-data-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e871b050-8136-486d-abc5-59a91f53d26c-profile-collector-cert\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb861f06-bf99-46e8-8627-c3d99245994b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250789 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-ca\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250807 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-config\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250827 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsvqf\" (UniqueName: \"kubernetes.io/projected/6a5895c9-f283-43f0-82d7-c8a0cbf377ce-kube-api-access-gsvqf\") pod \"control-plane-machine-set-operator-78cbb6b69f-tpnsx\" (UID: \"6a5895c9-f283-43f0-82d7-c8a0cbf377ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250862 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-metrics-certs\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250884 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9cf79399-929e-43c8-9ceb-06619ef1edee-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxz7g\" (UniqueName: \"kubernetes.io/projected/5396bd13-b4d6-42d2-834d-36e8e88715b5-kube-api-access-rxz7g\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250921 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dc809a2-ba43-4858-9f51-ce7f2e366f29-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250942 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99s7\" (UniqueName: \"kubernetes.io/projected/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-kube-api-access-h99s7\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250967 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5396bd13-b4d6-42d2-834d-36e8e88715b5-images\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.250989 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-bound-sa-token\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251012 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251040 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a5895c9-f283-43f0-82d7-c8a0cbf377ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tpnsx\" (UID: \"6a5895c9-f283-43f0-82d7-c8a0cbf377ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251320 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aedec6a-6287-4e21-92a5-c818c2879842-cert\") pod \"ingress-canary-ztfdt\" (UID: \"7aedec6a-6287-4e21-92a5-c818c2879842\") " pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251356 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dc809a2-ba43-4858-9f51-ce7f2e366f29-proxy-tls\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77b1f878-6463-4342-b3f1-96c32e69e4d9-signing-key\") pod \"service-ca-9c57cc56f-fc989\" (UID: 
\"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251400 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9e8c957-294c-4be0-812d-9cc81edf44f6-webhook-cert\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251421 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-config\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251449 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrm4q\" (UniqueName: \"kubernetes.io/projected/3084c4ad-c24d-48e2-9734-99ca07d07bab-kube-api-access-vrm4q\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251473 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/1bc575bb-6b05-4fe4-92fb-e467de4810b7-kube-api-access-xwr7q\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251495 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-config\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251506 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251517 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/451ced6a-ebdc-43a3-9639-5d74f0885fed-config-volume\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28505874-0a70-4f53-8070-607918790abe-auth-proxy-config\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251544 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6f7\" (UniqueName: \"kubernetes.io/projected/29f8bd1f-9a14-4725-a333-ee7509778b5d-kube-api-access-8n6f7\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251623 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddzgl\" (UniqueName: \"kubernetes.io/projected/31c906f5-7452-4a91-ac3e-3c230e7785aa-kube-api-access-ddzgl\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251653 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c14ef86-26dd-4ad1-854e-2592ba200b02-trusted-ca\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-encryption-config\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251712 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7fl6\" (UniqueName: \"kubernetes.io/projected/9548bee5-799d-49de-bc66-296f14396f43-kube-api-access-h7fl6\") pod \"package-server-manager-789f6589d5-vgjr8\" (UID: \"9548bee5-799d-49de-bc66-296f14396f43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251773 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-socket-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: 
\"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xvb\" (UniqueName: \"kubernetes.io/projected/28c95f1d-fd75-4161-92ca-4cc1e928a1ba-kube-api-access-x9xvb\") pod \"cluster-samples-operator-665b6dd947-rjn87\" (UID: \"28c95f1d-fd75-4161-92ca-4cc1e928a1ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251819 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprxg\" (UniqueName: \"kubernetes.io/projected/aa502e6c-070c-46ab-a1b8-82e34b55aad7-kube-api-access-pprxg\") pod \"migrator-59844c95c7-ds4bl\" (UID: \"aa502e6c-070c-46ab-a1b8-82e34b55aad7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.251859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.253179 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.253460 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.253466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27d2fb4d-0721-4384-bfd5-2070137b6e1c-serving-cert\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.253573 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-registration-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.253598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bc575bb-6b05-4fe4-92fb-e467de4810b7-serving-cert\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.253611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-trusted-ca\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.253641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254209 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254239 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254307 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-certificates\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254372 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-audit-policies\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254411 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6205a23a-a18f-44c3-82be-ccecbf757630-serving-cert\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254540 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0face492-83c1-49d4-bc1e-7de407151988-config-volume\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254577 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254602 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-stats-auth\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254627 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-serving-cert\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb861f06-bf99-46e8-8627-c3d99245994b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.254708 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-config\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.255274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-trusted-ca\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.255639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb861f06-bf99-46e8-8627-c3d99245994b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.255871 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bc575bb-6b05-4fe4-92fb-e467de4810b7-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.255974 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-audit-policies\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.255974 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.256598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-tls\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.257612 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.257836 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb861f06-bf99-46e8-8627-c3d99245994b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.258518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9cf79399-929e-43c8-9ceb-06619ef1edee-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.271039 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.283226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af47ec2-b35c-48df-8f91-9c878fb5ee94-serving-cert\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.290686 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.311275 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.321027 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.330673 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.333697 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.351113 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.352158 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr"] Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.354020 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx"] Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355135 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.355312 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:43.855293542 +0000 UTC m=+143.134679984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355458 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355492 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9e8c957-294c-4be0-812d-9cc81edf44f6-tmpfs\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355518 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-default-certificate\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc 
kubenswrapper[4768]: I0217 13:38:43.355545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43681511-fb8b-441c-bde6-0b1fa3cd8955-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355574 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c14ef86-26dd-4ad1-854e-2592ba200b02-config\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e871b050-8136-486d-abc5-59a91f53d26c-profile-collector-cert\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355647 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-csi-data-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " 
pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355687 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-metrics-certs\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355719 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsvqf\" (UniqueName: \"kubernetes.io/projected/6a5895c9-f283-43f0-82d7-c8a0cbf377ce-kube-api-access-gsvqf\") pod \"control-plane-machine-set-operator-78cbb6b69f-tpnsx\" (UID: \"6a5895c9-f283-43f0-82d7-c8a0cbf377ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dc809a2-ba43-4858-9f51-ce7f2e366f29-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355832 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h99s7\" (UniqueName: 
\"kubernetes.io/projected/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-kube-api-access-h99s7\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355858 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxz7g\" (UniqueName: \"kubernetes.io/projected/5396bd13-b4d6-42d2-834d-36e8e88715b5-kube-api-access-rxz7g\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355881 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5396bd13-b4d6-42d2-834d-36e8e88715b5-images\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355899 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aedec6a-6287-4e21-92a5-c818c2879842-cert\") pod \"ingress-canary-ztfdt\" (UID: \"7aedec6a-6287-4e21-92a5-c818c2879842\") " pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355922 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a5895c9-f283-43f0-82d7-c8a0cbf377ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tpnsx\" (UID: \"6a5895c9-f283-43f0-82d7-c8a0cbf377ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:43 crc 
kubenswrapper[4768]: I0217 13:38:43.355938 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dc809a2-ba43-4858-9f51-ce7f2e366f29-proxy-tls\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77b1f878-6463-4342-b3f1-96c32e69e4d9-signing-key\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9e8c957-294c-4be0-812d-9cc81edf44f6-webhook-cert\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355982 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-config\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.355995 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/451ced6a-ebdc-43a3-9639-5d74f0885fed-config-volume\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:43 crc kubenswrapper[4768]: 
I0217 13:38:43.356010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n6f7\" (UniqueName: \"kubernetes.io/projected/29f8bd1f-9a14-4725-a333-ee7509778b5d-kube-api-access-8n6f7\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356026 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddzgl\" (UniqueName: \"kubernetes.io/projected/31c906f5-7452-4a91-ac3e-3c230e7785aa-kube-api-access-ddzgl\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356055 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c14ef86-26dd-4ad1-854e-2592ba200b02-trusted-ca\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356074 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-encryption-config\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356096 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7fl6\" (UniqueName: \"kubernetes.io/projected/9548bee5-799d-49de-bc66-296f14396f43-kube-api-access-h7fl6\") pod \"package-server-manager-789f6589d5-vgjr8\" (UID: \"9548bee5-799d-49de-bc66-296f14396f43\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356138 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-socket-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356155 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xvb\" (UniqueName: \"kubernetes.io/projected/28c95f1d-fd75-4161-92ca-4cc1e928a1ba-kube-api-access-x9xvb\") pod \"cluster-samples-operator-665b6dd947-rjn87\" (UID: \"28c95f1d-fd75-4161-92ca-4cc1e928a1ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356176 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-registration-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356197 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6205a23a-a18f-44c3-82be-ccecbf757630-serving-cert\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0face492-83c1-49d4-bc1e-7de407151988-config-volume\") pod 
\"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-stats-auth\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356248 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-serving-cert\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/150a773f-d920-422b-b8d3-3e33876a0642-node-pullsecrets\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5396bd13-b4d6-42d2-834d-36e8e88715b5-proxy-tls\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-mountpoint-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356328 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43681511-fb8b-441c-bde6-0b1fa3cd8955-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356355 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4tmz\" (UniqueName: \"kubernetes.io/projected/d9e8c957-294c-4be0-812d-9cc81edf44f6-kube-api-access-k4tmz\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-plugins-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356392 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 
13:38:43.356414 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/31c906f5-7452-4a91-ac3e-3c230e7785aa-certs\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356429 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cdvg\" (UniqueName: \"kubernetes.io/projected/7aedec6a-6287-4e21-92a5-c818c2879842-kube-api-access-6cdvg\") pod \"ingress-canary-ztfdt\" (UID: \"7aedec6a-6287-4e21-92a5-c818c2879842\") " pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356444 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/451ced6a-ebdc-43a3-9639-5d74f0885fed-metrics-tls\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-etcd-client\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2x7\" (UniqueName: \"kubernetes.io/projected/00ff8eee-3713-495f-a7c7-d05bba726cda-kube-api-access-5s2x7\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356515 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c14ef86-26dd-4ad1-854e-2592ba200b02-serving-cert\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356532 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v5pz\" (UniqueName: \"kubernetes.io/projected/150a773f-d920-422b-b8d3-3e33876a0642-kube-api-access-5v5pz\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43681511-fb8b-441c-bde6-0b1fa3cd8955-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5396bd13-b4d6-42d2-834d-36e8e88715b5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356590 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49862fb8-6a93-48ac-926a-846f72a67989-service-ca-bundle\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " 
pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356606 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0face492-83c1-49d4-bc1e-7de407151988-secret-volume\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356622 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2sh\" (UniqueName: \"kubernetes.io/projected/77b1f878-6463-4342-b3f1-96c32e69e4d9-kube-api-access-mv2sh\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356653 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k94d\" (UniqueName: \"kubernetes.io/projected/49862fb8-6a93-48ac-926a-846f72a67989-kube-api-access-2k94d\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356677 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-config\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5d6n\" (UniqueName: 
\"kubernetes.io/projected/6205a23a-a18f-44c3-82be-ccecbf757630-kube-api-access-v5d6n\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356724 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77b1f878-6463-4342-b3f1-96c32e69e4d9-signing-cabundle\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356748 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9548bee5-799d-49de-bc66-296f14396f43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vgjr8\" (UID: \"9548bee5-799d-49de-bc66-296f14396f43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6jx\" (UniqueName: \"kubernetes.io/projected/a192d1d8-c111-4c76-b256-7110fc99b045-kube-api-access-5r6jx\") pod \"multus-admission-controller-857f4d67dd-26dvz\" (UID: \"a192d1d8-c111-4c76-b256-7110fc99b045\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356797 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkffj\" (UniqueName: \"kubernetes.io/projected/4c14ef86-26dd-4ad1-854e-2592ba200b02-kube-api-access-hkffj\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " 
pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356815 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a192d1d8-c111-4c76-b256-7110fc99b045-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-26dvz\" (UID: \"a192d1d8-c111-4c76-b256-7110fc99b045\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356848 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356872 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9e8c957-294c-4be0-812d-9cc81edf44f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356889 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-image-import-ca\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-srv-cert\") pod 
\"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356926 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e871b050-8136-486d-abc5-59a91f53d26c-srv-cert\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356963 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/150a773f-d920-422b-b8d3-3e33876a0642-audit-dir\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.356990 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-audit\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357012 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwbct\" (UniqueName: \"kubernetes.io/projected/0face492-83c1-49d4-bc1e-7de407151988-kube-api-access-zwbct\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6205a23a-a18f-44c3-82be-ccecbf757630-config\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357093 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62g7l\" (UniqueName: \"kubernetes.io/projected/5dc809a2-ba43-4858-9f51-ce7f2e366f29-kube-api-access-62g7l\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28c95f1d-fd75-4161-92ca-4cc1e928a1ba-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rjn87\" (UID: \"28c95f1d-fd75-4161-92ca-4cc1e928a1ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357163 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2sx4\" (UniqueName: \"kubernetes.io/projected/451ced6a-ebdc-43a3-9639-5d74f0885fed-kube-api-access-p2sx4\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 
13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357195 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29z5d\" (UniqueName: \"kubernetes.io/projected/e871b050-8136-486d-abc5-59a91f53d26c-kube-api-access-29z5d\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357221 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-etcd-serving-ca\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357257 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/31c906f5-7452-4a91-ac3e-3c230e7785aa-node-bootstrap-token\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357303 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357348 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43681511-fb8b-441c-bde6-0b1fa3cd8955-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: 
\"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357419 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.357661 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:43.857634366 +0000 UTC m=+143.137020888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357702 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9e8c957-294c-4be0-812d-9cc81edf44f6-tmpfs\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357865 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.357957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-csi-data-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.358012 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dc809a2-ba43-4858-9f51-ce7f2e366f29-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.358010 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-socket-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.358454 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c14ef86-26dd-4ad1-854e-2592ba200b02-config\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.358533 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-registration-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.359052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-config\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.359730 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/451ced6a-ebdc-43a3-9639-5d74f0885fed-config-volume\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.360385 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7aedec6a-6287-4e21-92a5-c818c2879842-cert\") pod \"ingress-canary-ztfdt\" (UID: \"7aedec6a-6287-4e21-92a5-c818c2879842\") " pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.360663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.360990 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-default-certificate\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.361032 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9e8c957-294c-4be0-812d-9cc81edf44f6-webhook-cert\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.361679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-config\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.361970 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0face492-83c1-49d4-bc1e-7de407151988-config-volume\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.362219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-encryption-config\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.362524 4768 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77b1f878-6463-4342-b3f1-96c32e69e4d9-signing-cabundle\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.362834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-metrics-certs\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.362851 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e871b050-8136-486d-abc5-59a91f53d26c-profile-collector-cert\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.362944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5396bd13-b4d6-42d2-834d-36e8e88715b5-images\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.362975 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-mountpoint-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.363094 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/00ff8eee-3713-495f-a7c7-d05bba726cda-plugins-dir\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.363587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6205a23a-a18f-44c3-82be-ccecbf757630-serving-cert\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.363842 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-audit\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.363952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/150a773f-d920-422b-b8d3-3e33876a0642-audit-dir\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.364150 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6205a23a-a18f-44c3-82be-ccecbf757630-config\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.364215 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/5dc809a2-ba43-4858-9f51-ce7f2e366f29-proxy-tls\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.364543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49862fb8-6a93-48ac-926a-846f72a67989-stats-auth\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.364658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-etcd-serving-ca\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.364956 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49862fb8-6a93-48ac-926a-846f72a67989-service-ca-bundle\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.365720 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-serving-cert\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.365962 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5396bd13-b4d6-42d2-834d-36e8e88715b5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.367236 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e871b050-8136-486d-abc5-59a91f53d26c-srv-cert\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.367543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77b1f878-6463-4342-b3f1-96c32e69e4d9-signing-key\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.367962 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c14ef86-26dd-4ad1-854e-2592ba200b02-serving-cert\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.368041 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0face492-83c1-49d4-bc1e-7de407151988-secret-volume\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.368104 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c14ef86-26dd-4ad1-854e-2592ba200b02-trusted-ca\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.368324 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.368349 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/31c906f5-7452-4a91-ac3e-3c230e7785aa-node-bootstrap-token\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.368465 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/150a773f-d920-422b-b8d3-3e33876a0642-node-pullsecrets\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.369295 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5396bd13-b4d6-42d2-834d-36e8e88715b5-proxy-tls\") pod \"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.369462 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/150a773f-d920-422b-b8d3-3e33876a0642-image-import-ca\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.369534 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/150a773f-d920-422b-b8d3-3e33876a0642-etcd-client\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.369568 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28c95f1d-fd75-4161-92ca-4cc1e928a1ba-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rjn87\" (UID: \"28c95f1d-fd75-4161-92ca-4cc1e928a1ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.371123 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9548bee5-799d-49de-bc66-296f14396f43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vgjr8\" (UID: \"9548bee5-799d-49de-bc66-296f14396f43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.371484 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.373152 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d9e8c957-294c-4be0-812d-9cc81edf44f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.373282 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-srv-cert\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.373361 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.373480 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/31c906f5-7452-4a91-ac3e-3c230e7785aa-certs\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.373840 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a5895c9-f283-43f0-82d7-c8a0cbf377ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tpnsx\" (UID: \"6a5895c9-f283-43f0-82d7-c8a0cbf377ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.374536 
4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a192d1d8-c111-4c76-b256-7110fc99b045-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-26dvz\" (UID: \"a192d1d8-c111-4c76-b256-7110fc99b045\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.377161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/451ced6a-ebdc-43a3-9639-5d74f0885fed-metrics-tls\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.378670 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43681511-fb8b-441c-bde6-0b1fa3cd8955-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.391530 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.396930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5cbab6-dd03-463e-9940-ad55678c9e38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.411367 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" 
Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.416974 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8"] Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.422234 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npp8v\" (UniqueName: \"kubernetes.io/projected/ff5cbab6-dd03-463e-9940-ad55678c9e38-kube-api-access-npp8v\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.424627 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9fmzj"] Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.429715 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z2b5c"] Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.430682 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.438385 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gwd2q"] Feb 17 13:38:43 crc kubenswrapper[4768]: W0217 13:38:43.440919 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38dc9a37_3332_40e5_b4cd_3c702455584d.slice/crio-bc27f97fe1bf9ce8697139b8f007ef5ca39425ddd23ed79dee0d4e7253264bdd WatchSource:0}: Error finding container bc27f97fe1bf9ce8697139b8f007ef5ca39425ddd23ed79dee0d4e7253264bdd: Status 404 returned error can't find the container with id bc27f97fe1bf9ce8697139b8f007ef5ca39425ddd23ed79dee0d4e7253264bdd Feb 17 13:38:43 crc kubenswrapper[4768]: W0217 
13:38:43.442426 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0030a046_d1bb_4a34_830c_c275306cee43.slice/crio-8c24714f527a739abe2f59896b8429c3c57b368f0af4069f5376ea795f5efa92 WatchSource:0}: Error finding container 8c24714f527a739abe2f59896b8429c3c57b368f0af4069f5376ea795f5efa92: Status 404 returned error can't find the container with id 8c24714f527a739abe2f59896b8429c3c57b368f0af4069f5376ea795f5efa92 Feb 17 13:38:43 crc kubenswrapper[4768]: W0217 13:38:43.444543 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49314047_3ca6_4f00_bdbc_bfa8a611ddb5.slice/crio-029e1efd8da46cda1b74aa91f6e9a83aea9c1a67e7e775d067c2c09a27d93551 WatchSource:0}: Error finding container 029e1efd8da46cda1b74aa91f6e9a83aea9c1a67e7e775d067c2c09a27d93551: Status 404 returned error can't find the container with id 029e1efd8da46cda1b74aa91f6e9a83aea9c1a67e7e775d067c2c09a27d93551 Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.450368 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.457738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.457885 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:43.957862669 +0000 UTC m=+143.237249111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.458346 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.458625 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:43.958615633 +0000 UTC m=+143.238002145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.470036 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.471458 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.492006 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.510592 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.523156 4768 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.523391 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert podName:ff5cbab6-dd03-463e-9940-ad55678c9e38 nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:44.523367953 +0000 UTC m=+143.802754405 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-p7h6m" (UID: "ff5cbab6-dd03-463e-9940-ad55678c9e38") : failed to sync secret cache: timed out waiting for the condition Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.529640 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3084c4ad-c24d-48e2-9734-99ca07d07bab-serving-cert\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.530293 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.537292 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.550942 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.554619 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-ca\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.559677 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.559829 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.059807016 +0000 UTC m=+143.339193458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.560247 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.560704 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-17 13:38:44.060689824 +0000 UTC m=+143.340076266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.574246 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.585638 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.590176 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.591665 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-config\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.613396 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 
13:38:43.623298 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.631282 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.656273 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.661277 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.668217 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.168188125 +0000 UTC m=+143.447574567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.674386 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.674910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.675597 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.175579588 +0000 UTC m=+143.454966030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.682699 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28505874-0a70-4f53-8070-607918790abe-config\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.700412 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.701424 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.712066 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.721374 4768 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.721428 4768 
projected.go:194] Error preparing data for projected volume kube-api-access-mz8kk for pod openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5: failed to sync configmap cache: timed out waiting for the condition Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.721495 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-kube-api-access-mz8kk podName:5d543cd6-dc4d-4ad7-b617-389465cd2cd7 nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.221472216 +0000 UTC m=+143.500858658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mz8kk" (UniqueName: "kubernetes.io/projected/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-kube-api-access-mz8kk") pod "openshift-controller-manager-operator-756b6f6bc6-nf6w5" (UID: "5d543cd6-dc4d-4ad7-b617-389465cd2cd7") : failed to sync configmap cache: timed out waiting for the condition Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.723526 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/28505874-0a70-4f53-8070-607918790abe-machine-approver-tls\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.731482 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.750745 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.757563 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-client\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.777658 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.778389 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.27836755 +0000 UTC m=+143.557754002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.781787 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.784613 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.790879 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.810701 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.814774 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3084c4ad-c24d-48e2-9734-99ca07d07bab-etcd-service-ca\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.830713 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.842171 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-client-ca\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.857249 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.866819 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.879480 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.879842 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.379826182 +0000 UTC m=+143.659212624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.906145 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cccpk\" (UniqueName: \"kubernetes.io/projected/0af47ec2-b35c-48df-8f91-9c878fb5ee94-kube-api-access-cccpk\") pod \"controller-manager-879f6c89f-bsrtm\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.925451 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jpcl\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-kube-api-access-7jpcl\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.944724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k6gl\" (UniqueName: \"kubernetes.io/projected/28505874-0a70-4f53-8070-607918790abe-kube-api-access-6k6gl\") pod \"machine-approver-56656f9798-vx6f4\" (UID: \"28505874-0a70-4f53-8070-607918790abe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.963056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98sh9\" (UniqueName: \"kubernetes.io/projected/916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c-kube-api-access-98sh9\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2p2tw\" (UID: \"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.980363 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.980522 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.48050144 +0000 UTC m=+143.759887882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.980660 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:43 crc kubenswrapper[4768]: E0217 13:38:43.981060 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.481048758 +0000 UTC m=+143.760435200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:43 crc kubenswrapper[4768]: I0217 13:38:43.984950 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb861f06-bf99-46e8-8627-c3d99245994b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.007300 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr2bq\" (UniqueName: \"kubernetes.io/projected/b7837593-1275-40cb-820f-afe9cb13fad4-kube-api-access-dr2bq\") pod \"oauth-openshift-558db77b4-h77q6\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.028308 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxfph\" (UniqueName: \"kubernetes.io/projected/eb861f06-bf99-46e8-8627-c3d99245994b-kube-api-access-nxfph\") pod \"cluster-image-registry-operator-dc59b4c8b-kqwfk\" (UID: \"eb861f06-bf99-46e8-8627-c3d99245994b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.044874 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprxg\" (UniqueName: \"kubernetes.io/projected/aa502e6c-070c-46ab-a1b8-82e34b55aad7-kube-api-access-pprxg\") pod \"migrator-59844c95c7-ds4bl\" (UID: \"aa502e6c-070c-46ab-a1b8-82e34b55aad7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.045122 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.060272 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.069067 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrm4q\" (UniqueName: \"kubernetes.io/projected/3084c4ad-c24d-48e2-9734-99ca07d07bab-kube-api-access-vrm4q\") pod \"etcd-operator-b45778765-t54pq\" (UID: \"3084c4ad-c24d-48e2-9734-99ca07d07bab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.076161 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.081874 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.082494 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.582474748 +0000 UTC m=+143.861861190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.092785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/1bc575bb-6b05-4fe4-92fb-e467de4810b7-kube-api-access-xwr7q\") pod \"authentication-operator-69f744f599-gvm54\" (UID: \"1bc575bb-6b05-4fe4-92fb-e467de4810b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.097698 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.107840 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-bound-sa-token\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.129183 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgph4\" (UniqueName: \"kubernetes.io/projected/27d2fb4d-0721-4384-bfd5-2070137b6e1c-kube-api-access-xgph4\") pod \"openshift-config-operator-7777fb866f-m66tk\" (UID: \"27d2fb4d-0721-4384-bfd5-2070137b6e1c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.170174 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsvqf\" (UniqueName: \"kubernetes.io/projected/6a5895c9-f283-43f0-82d7-c8a0cbf377ce-kube-api-access-gsvqf\") pod \"control-plane-machine-set-operator-78cbb6b69f-tpnsx\" (UID: \"6a5895c9-f283-43f0-82d7-c8a0cbf377ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.183436 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h99s7\" (UniqueName: \"kubernetes.io/projected/c8527686-e4fe-4252-a6ac-b6ed4c7ad183-kube-api-access-h99s7\") pod \"olm-operator-6b444d44fb-xwxcr\" (UID: \"c8527686-e4fe-4252-a6ac-b6ed4c7ad183\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.183960 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.184532 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.684519348 +0000 UTC m=+143.963905790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.185764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7fl6\" (UniqueName: \"kubernetes.io/projected/9548bee5-799d-49de-bc66-296f14396f43-kube-api-access-h7fl6\") pod \"package-server-manager-789f6589d5-vgjr8\" (UID: \"9548bee5-799d-49de-bc66-296f14396f43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.210196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xvb\" (UniqueName: \"kubernetes.io/projected/28c95f1d-fd75-4161-92ca-4cc1e928a1ba-kube-api-access-x9xvb\") pod \"cluster-samples-operator-665b6dd947-rjn87\" (UID: \"28c95f1d-fd75-4161-92ca-4cc1e928a1ba\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.212185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fmzj" event={"ID":"0030a046-d1bb-4a34-830c-c275306cee43","Type":"ContainerStarted","Data":"0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.212228 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fmzj" event={"ID":"0030a046-d1bb-4a34-830c-c275306cee43","Type":"ContainerStarted","Data":"8c24714f527a739abe2f59896b8429c3c57b368f0af4069f5376ea795f5efa92"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.220354 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.238618 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.242418 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.242834 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.247684 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n6f7\" (UniqueName: \"kubernetes.io/projected/29f8bd1f-9a14-4725-a333-ee7509778b5d-kube-api-access-8n6f7\") pod \"marketplace-operator-79b997595-5wgl6\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.248991 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mvb69" event={"ID":"7483ebd8-979d-429d-9197-cf5ae208af0a","Type":"ContainerStarted","Data":"61a7a551bdb99fef5c274d5af0271c12c3b19e978ea59050595d4ec6836f56bd"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.249525 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mvb69" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.250675 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvb69 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.250708 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvb69" podUID="7483ebd8-979d-429d-9197-cf5ae208af0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.252145 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxz7g\" (UniqueName: \"kubernetes.io/projected/5396bd13-b4d6-42d2-834d-36e8e88715b5-kube-api-access-rxz7g\") pod 
\"machine-config-operator-74547568cd-czhdm\" (UID: \"5396bd13-b4d6-42d2-834d-36e8e88715b5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.252818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" event={"ID":"49314047-3ca6-4f00-bdbc-bfa8a611ddb5","Type":"ContainerStarted","Data":"17473bd47a76dd5603d4cb5476f6e178c7540cd3ab1f971505d21d246cd5f20b"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.252905 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" event={"ID":"49314047-3ca6-4f00-bdbc-bfa8a611ddb5","Type":"ContainerStarted","Data":"39c9de0a16c3f65ef860ba961af27cb1a40310ad456202a709b9ba80b4f68db8"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.252926 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" event={"ID":"49314047-3ca6-4f00-bdbc-bfa8a611ddb5","Type":"ContainerStarted","Data":"029e1efd8da46cda1b74aa91f6e9a83aea9c1a67e7e775d067c2c09a27d93551"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.257773 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" event={"ID":"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773","Type":"ContainerStarted","Data":"7b69291183765f34d566784290a299965003c1870273482c5c4cdce1e9600f77"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.257818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" event={"ID":"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773","Type":"ContainerStarted","Data":"ad40713bc17388a9ffd14843fcb8a014b014908d81db7a14249450bbe09501b3"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.258544 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.263319 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.278779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddzgl\" (UniqueName: \"kubernetes.io/projected/31c906f5-7452-4a91-ac3e-3c230e7785aa-kube-api-access-ddzgl\") pod \"machine-config-server-k6fdj\" (UID: \"31c906f5-7452-4a91-ac3e-3c230e7785aa\") " pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.284245 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" event={"ID":"74e5b268-67e3-4e32-bccb-1a1f0717a2db","Type":"ContainerStarted","Data":"8e56f31b0fba65aa9f059d5954e6add53c8bcd65ff4ba0a111f62532d1aad7cb"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.284279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" event={"ID":"74e5b268-67e3-4e32-bccb-1a1f0717a2db","Type":"ContainerStarted","Data":"1668aa5947fc9a1ff0991d7141da707d247ed9c4db3b65eed07177b5c30c92b3"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.284290 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" event={"ID":"74e5b268-67e3-4e32-bccb-1a1f0717a2db","Type":"ContainerStarted","Data":"82c02afdc0779579bb145a6c8e9769d70f023ee7cd116c2ac65e119615184c2c"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.285081 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.285478 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz8kk\" (UniqueName: \"kubernetes.io/projected/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-kube-api-access-mz8kk\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.287358 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.787335443 +0000 UTC m=+144.066721885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.288950 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz8kk\" (UniqueName: \"kubernetes.io/projected/5d543cd6-dc4d-4ad7-b617-389465cd2cd7-kube-api-access-mz8kk\") pod \"openshift-controller-manager-operator-756b6f6bc6-nf6w5\" (UID: \"5d543cd6-dc4d-4ad7-b617-389465cd2cd7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.292827 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5d6n\" (UniqueName: \"kubernetes.io/projected/6205a23a-a18f-44c3-82be-ccecbf757630-kube-api-access-v5d6n\") pod \"service-ca-operator-777779d784-wdd7d\" (UID: \"6205a23a-a18f-44c3-82be-ccecbf757630\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.294657 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" event={"ID":"4bd94acf-ca75-4475-b5ca-445219fccb15","Type":"ContainerStarted","Data":"39b3ab5374539654f52324fefa89e9459f4b813e39a79154e1c0bcc6e1402b7e"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.294702 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" 
event={"ID":"4bd94acf-ca75-4475-b5ca-445219fccb15","Type":"ContainerStarted","Data":"0f54f3fa7139055028d92b38897532dd88d869b838879101b89cbfd4ab72b86f"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.297984 4768 generic.go:334] "Generic (PLEG): container finished" podID="39ee8ba0-977c-48f3-8ac9-65b958991220" containerID="0cc944083c7cf9da841454750e294e7788dad36650dd32deaa3f1499aa89d811" exitCode=0 Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.298083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" event={"ID":"39ee8ba0-977c-48f3-8ac9-65b958991220","Type":"ContainerDied","Data":"0cc944083c7cf9da841454750e294e7788dad36650dd32deaa3f1499aa89d811"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.298152 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" event={"ID":"39ee8ba0-977c-48f3-8ac9-65b958991220","Type":"ContainerStarted","Data":"6d9ad01c142863c2c1534d023f8299eb895df3faf4b312c893b57fc82215ab2d"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.301386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" event={"ID":"38dc9a37-3332-40e5-b4cd-3c702455584d","Type":"ContainerStarted","Data":"f77430c8af65f2fdf2003a517abcb1b40bfee01336bcfccbc06a9d423df3a53d"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.301425 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" event={"ID":"38dc9a37-3332-40e5-b4cd-3c702455584d","Type":"ContainerStarted","Data":"b121d9f253e92404dbe0812ce34587884ce4a8b91e8fe452265bf2f4b0e02aa4"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.301435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" 
event={"ID":"38dc9a37-3332-40e5-b4cd-3c702455584d","Type":"ContainerStarted","Data":"bc27f97fe1bf9ce8697139b8f007ef5ca39425ddd23ed79dee0d4e7253264bdd"} Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.310780 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.312509 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43681511-fb8b-441c-bde6-0b1fa3cd8955-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8lzlh\" (UID: \"43681511-fb8b-441c-bde6-0b1fa3cd8955\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.322883 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw"] Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.334706 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4tmz\" (UniqueName: \"kubernetes.io/projected/d9e8c957-294c-4be0-812d-9cc81edf44f6-kube-api-access-k4tmz\") pod \"packageserver-d55dfcdfc-llw5g\" (UID: \"d9e8c957-294c-4be0-812d-9cc81edf44f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.352302 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29z5d\" (UniqueName: \"kubernetes.io/projected/e871b050-8136-486d-abc5-59a91f53d26c-kube-api-access-29z5d\") pod \"catalog-operator-68c6474976-ghmbf\" (UID: \"e871b050-8136-486d-abc5-59a91f53d26c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.353235 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.364988 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bsrtm"] Feb 17 13:38:44 crc kubenswrapper[4768]: W0217 13:38:44.366214 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod916ef1b0_45af_4b2a_b4ec_ad6f78f43c0c.slice/crio-a4db0e227dba5168c4c16d5c7f3037dc1d1b0925c252322b507201490a95f8e1 WatchSource:0}: Error finding container a4db0e227dba5168c4c16d5c7f3037dc1d1b0925c252322b507201490a95f8e1: Status 404 returned error can't find the container with id a4db0e227dba5168c4c16d5c7f3037dc1d1b0925c252322b507201490a95f8e1 Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.368299 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.368976 4768 request.go:700] Waited for 1.004130611s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/serviceaccounts/csi-hostpath-provisioner-sa/token Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.374087 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea95ef83-4d37-4fa3-b58c-5712a3fe0450-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rm2vj\" (UID: \"ea95ef83-4d37-4fa3-b58c-5712a3fe0450\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.389437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.401374 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.901359029 +0000 UTC m=+144.180745471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.405321 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.407732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2x7\" (UniqueName: \"kubernetes.io/projected/00ff8eee-3713-495f-a7c7-d05bba726cda-kube-api-access-5s2x7\") pod \"csi-hostpathplugin-pls7p\" (UID: \"00ff8eee-3713-495f-a7c7-d05bba726cda\") " pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.419576 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.426161 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.432816 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v5pz\" (UniqueName: \"kubernetes.io/projected/150a773f-d920-422b-b8d3-3e33876a0642-kube-api-access-5v5pz\") pod \"apiserver-76f77b778f-hxzgb\" (UID: \"150a773f-d920-422b-b8d3-3e33876a0642\") " pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.443163 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.444344 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2sx4\" (UniqueName: \"kubernetes.io/projected/451ced6a-ebdc-43a3-9639-5d74f0885fed-kube-api-access-p2sx4\") pod \"dns-default-sqgwf\" (UID: \"451ced6a-ebdc-43a3-9639-5d74f0885fed\") " pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.444681 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.456526 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.470419 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.482547 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62g7l\" (UniqueName: \"kubernetes.io/projected/5dc809a2-ba43-4858-9f51-ce7f2e366f29-kube-api-access-62g7l\") pod \"machine-config-controller-84d6567774-m4thw\" (UID: \"5dc809a2-ba43-4858-9f51-ce7f2e366f29\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.484548 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.490966 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.491421 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:44.991390773 +0000 UTC m=+144.270777215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.495444 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cdvg\" (UniqueName: \"kubernetes.io/projected/7aedec6a-6287-4e21-92a5-c818c2879842-kube-api-access-6cdvg\") pod \"ingress-canary-ztfdt\" (UID: \"7aedec6a-6287-4e21-92a5-c818c2879842\") " pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.498401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwbct\" (UniqueName: \"kubernetes.io/projected/0face492-83c1-49d4-bc1e-7de407151988-kube-api-access-zwbct\") pod \"collect-profiles-29522250-hq8zz\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.499756 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h77q6"] Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.516379 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.516573 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.520014 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2sh\" (UniqueName: \"kubernetes.io/projected/77b1f878-6463-4342-b3f1-96c32e69e4d9-kube-api-access-mv2sh\") pod \"service-ca-9c57cc56f-fc989\" (UID: \"77b1f878-6463-4342-b3f1-96c32e69e4d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.528205 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl"] Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.531556 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k94d\" (UniqueName: \"kubernetes.io/projected/49862fb8-6a93-48ac-926a-846f72a67989-kube-api-access-2k94d\") pod \"router-default-5444994796-ql5b5\" (UID: \"49862fb8-6a93-48ac-926a-846f72a67989\") " pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.533572 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.535731 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.549585 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k6fdj" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.574394 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkffj\" (UniqueName: \"kubernetes.io/projected/4c14ef86-26dd-4ad1-854e-2592ba200b02-kube-api-access-hkffj\") pod \"console-operator-58897d9998-8c6lh\" (UID: \"4c14ef86-26dd-4ad1-854e-2592ba200b02\") " pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.574437 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ztfdt" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.586641 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.590945 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6jx\" (UniqueName: \"kubernetes.io/projected/a192d1d8-c111-4c76-b256-7110fc99b045-kube-api-access-5r6jx\") pod \"multus-admission-controller-857f4d67dd-26dvz\" (UID: \"a192d1d8-c111-4c76-b256-7110fc99b045\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.595851 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.596299 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.596357 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.596586 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.096574581 +0000 UTC m=+144.375961023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.611589 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5cbab6-dd03-463e-9940-ad55678c9e38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-p7h6m\" (UID: \"ff5cbab6-dd03-463e-9940-ad55678c9e38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.660993 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk"] Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.697102 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.697238 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.197208847 +0000 UTC m=+144.476595309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.698419 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.698843 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.198825978 +0000 UTC m=+144.478212420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.712710 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.714278 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-t54pq"] Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.773921 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.793936 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.801401 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.801907 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.30188699 +0000 UTC m=+144.581273432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.804666 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.804792 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fc989" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.862222 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.903360 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:44 crc kubenswrapper[4768]: E0217 13:38:44.904938 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.40488481 +0000 UTC m=+144.684271252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:44 crc kubenswrapper[4768]: I0217 13:38:44.998348 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.004561 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.004668 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.504649189 +0000 UTC m=+144.784035641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.004931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.005279 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.505269658 +0000 UTC m=+144.784656110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.105597 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.106016 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.605998967 +0000 UTC m=+144.885385409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.206832 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.207273 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.707257453 +0000 UTC m=+144.986643895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.308171 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.308413 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.808383864 +0000 UTC m=+145.087770306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.308778 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.310301 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:45.810290654 +0000 UTC m=+145.089677096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.330300 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" event={"ID":"3084c4ad-c24d-48e2-9734-99ca07d07bab","Type":"ContainerStarted","Data":"8d08ccdc77feb86af21d48cc4f0708355fdad7bea7c7f8503acdd535d496b899"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.331302 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" event={"ID":"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c","Type":"ContainerStarted","Data":"a4db0e227dba5168c4c16d5c7f3037dc1d1b0925c252322b507201490a95f8e1"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.331955 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" event={"ID":"b7837593-1275-40cb-820f-afe9cb13fad4","Type":"ContainerStarted","Data":"ea207b861d45db8aeb43f91c93ccb6d52238a7cffe4a80ef5a245306657ce11f"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.332642 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k6fdj" event={"ID":"31c906f5-7452-4a91-ac3e-3c230e7785aa","Type":"ContainerStarted","Data":"a3487be481ef04d7e3de0a62a25182a345c63d2a57605a09063da643d5782028"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.333358 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" 
event={"ID":"0af47ec2-b35c-48df-8f91-9c878fb5ee94","Type":"ContainerStarted","Data":"aa33fd1f159cb25aee488540f8590926d6e38d5886ecaed15983fe3d5b472941"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.334279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" event={"ID":"eb861f06-bf99-46e8-8627-c3d99245994b","Type":"ContainerStarted","Data":"74608bae35381f9a6f2808e45d464fd08a98dacd2a5bfb41e9173c83d84e4aed"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.343847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" event={"ID":"28505874-0a70-4f53-8070-607918790abe","Type":"ContainerStarted","Data":"7bbf9e3106c389a9c55883441a6cfd7f1319e2a1a11f87771f081b15dc7591d3"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.343883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" event={"ID":"28505874-0a70-4f53-8070-607918790abe","Type":"ContainerStarted","Data":"aceb284cca2d2fe2ac13830d7e5bf30bf23283d9888d4df99c384a628afa5166"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.344918 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" event={"ID":"6a5895c9-f283-43f0-82d7-c8a0cbf377ce","Type":"ContainerStarted","Data":"8a0936bb62a13e553888fa639ccf9e17cfefd1a7b3e0d36fdf554eeb5dd0c777"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.346795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" event={"ID":"aa502e6c-070c-46ab-a1b8-82e34b55aad7","Type":"ContainerStarted","Data":"39f2c579656241b5ea26cd3abb7592ce2f90498b574e54df4c5056ab473a70bf"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.349369 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ingress/router-default-5444994796-ql5b5" event={"ID":"49862fb8-6a93-48ac-926a-846f72a67989","Type":"ContainerStarted","Data":"3bc77bc2ca4c63a6dc9c32d1b5ff7b387c03cf9364a069b4f7e1dd00fd61fd0f"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.355483 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" event={"ID":"39ee8ba0-977c-48f3-8ac9-65b958991220","Type":"ContainerStarted","Data":"48825f450b630a06f0006184c9973913c86e8b03c4806ffd116991fb319ca544"} Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.356325 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvb69 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.356374 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvb69" podUID="7483ebd8-979d-429d-9197-cf5ae208af0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.410521 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.410918 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:45.91090547 +0000 UTC m=+145.190291912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.435505 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" podStartSLOduration=123.435327656 podStartE2EDuration="2m3.435327656s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:45.429849244 +0000 UTC m=+144.709235696" watchObservedRunningTime="2026-02-17 13:38:45.435327656 +0000 UTC m=+144.714714088" Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.515056 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.528734 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.028718335 +0000 UTC m=+145.308104767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.553901 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.556529 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.580447 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m66tk"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.589503 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.603377 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gvm54"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.615694 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.616049 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.116035653 +0000 UTC m=+145.395422085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.621493 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gwd2q" podStartSLOduration=124.621469893 podStartE2EDuration="2m4.621469893s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:45.618081147 +0000 UTC m=+144.897467579" watchObservedRunningTime="2026-02-17 13:38:45.621469893 +0000 UTC m=+144.900856345" Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.676796 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5nwnr" podStartSLOduration=124.676771368 podStartE2EDuration="2m4.676771368s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:45.673508796 +0000 UTC m=+144.952895238" watchObservedRunningTime="2026-02-17 13:38:45.676771368 +0000 UTC m=+144.956157820" Feb 17 13:38:45 crc kubenswrapper[4768]: W0217 13:38:45.700481 4768 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d543cd6_dc4d_4ad7_b617_389465cd2cd7.slice/crio-bc53191f0fc1cbb089542df4e4812b74a1b5c6aeb26bf3adf937b69e9353ae73 WatchSource:0}: Error finding container bc53191f0fc1cbb089542df4e4812b74a1b5c6aeb26bf3adf937b69e9353ae73: Status 404 returned error can't find the container with id bc53191f0fc1cbb089542df4e4812b74a1b5c6aeb26bf3adf937b69e9353ae73 Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.724610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.725286 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.225270349 +0000 UTC m=+145.504656791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.825569 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.825799 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.32574979 +0000 UTC m=+145.605136232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.826179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.826484 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.326472532 +0000 UTC m=+145.605858974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.849156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.851998 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.854878 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr"] Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.855712 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9fmzj" podStartSLOduration=124.855695719 podStartE2EDuration="2m4.855695719s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:45.845435707 +0000 UTC m=+145.124822149" watchObservedRunningTime="2026-02-17 13:38:45.855695719 +0000 UTC m=+145.135082151" Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.927710 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.927868 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.427847763 +0000 UTC m=+145.707234205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:45 crc kubenswrapper[4768]: I0217 13:38:45.928307 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:45 crc kubenswrapper[4768]: E0217 13:38:45.928609 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.428598146 +0000 UTC m=+145.707984578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.029026 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.029390 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.529377056 +0000 UTC m=+145.808763498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.044956 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sqgwf"] Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.050267 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451ced6a_ebdc_43a3_9639_5d74f0885fed.slice/crio-96858b8ac0080910c25b93266e4c5327c5b6d6417e4e7c5a6a1292c58c9352d8 WatchSource:0}: Error finding container 96858b8ac0080910c25b93266e4c5327c5b6d6417e4e7c5a6a1292c58c9352d8: Status 404 returned error can't find the container with id 96858b8ac0080910c25b93266e4c5327c5b6d6417e4e7c5a6a1292c58c9352d8 Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.062857 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2b5c" podStartSLOduration=124.062839846 podStartE2EDuration="2m4.062839846s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.061237695 +0000 UTC m=+145.340624137" watchObservedRunningTime="2026-02-17 13:38:46.062839846 +0000 UTC m=+145.342226288" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.066281 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8"] Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 
13:38:46.114076 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9548bee5_799d_49de_bc66_296f14396f43.slice/crio-ce4b15ea7f385923ddfc3a7134c9e2478ace5e54f271590aaa5cf0bc5943fcd6 WatchSource:0}: Error finding container ce4b15ea7f385923ddfc3a7134c9e2478ace5e54f271590aaa5cf0bc5943fcd6: Status 404 returned error can't find the container with id ce4b15ea7f385923ddfc3a7134c9e2478ace5e54f271590aaa5cf0bc5943fcd6 Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.131709 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.133865 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.632084057 +0000 UTC m=+145.911470499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.140667 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r7ksx" podStartSLOduration=125.140650505 podStartE2EDuration="2m5.140650505s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.106368901 +0000 UTC m=+145.385755353" watchObservedRunningTime="2026-02-17 13:38:46.140650505 +0000 UTC m=+145.420036947" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.233552 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.233657 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw"] Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.233891 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:46.73387774 +0000 UTC m=+146.013264182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.237315 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8c6lh"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.239972 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" podStartSLOduration=124.23995896 podStartE2EDuration="2m4.23995896s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.239126605 +0000 UTC m=+145.518513047" watchObservedRunningTime="2026-02-17 13:38:46.23995896 +0000 UTC m=+145.519345402" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.250163 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.285671 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.292467 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj"] Feb 17 13:38:46 crc kubenswrapper[4768]: 
W0217 13:38:46.312718 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c14ef86_26dd_4ad1_854e_2592ba200b02.slice/crio-27e2be5aa0de413aa429d41aaa4a7fa8907b4f1c58e03b3db63e90656fc0bcd8 WatchSource:0}: Error finding container 27e2be5aa0de413aa429d41aaa4a7fa8907b4f1c58e03b3db63e90656fc0bcd8: Status 404 returned error can't find the container with id 27e2be5aa0de413aa429d41aaa4a7fa8907b4f1c58e03b3db63e90656fc0bcd8 Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.322640 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6205a23a_a18f_44c3_82be_ccecbf757630.slice/crio-241f8b83579717bc90abffa1e31613102c3da370be8ecf16ed29d5afbb17640c WatchSource:0}: Error finding container 241f8b83579717bc90abffa1e31613102c3da370be8ecf16ed29d5afbb17640c: Status 404 returned error can't find the container with id 241f8b83579717bc90abffa1e31613102c3da370be8ecf16ed29d5afbb17640c Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.335219 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.335616 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.83560219 +0000 UTC m=+146.114988632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.339572 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.344443 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wgl6"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.346764 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mvb69" podStartSLOduration=125.346740599 podStartE2EDuration="2m5.346740599s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.342270779 +0000 UTC m=+145.621657221" watchObservedRunningTime="2026-02-17 13:38:46.346740599 +0000 UTC m=+145.626127041" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.353723 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pls7p"] Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.357863 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5cbab6_dd03_463e_9940_ad55678c9e38.slice/crio-a0c768206b2185acc38e2bda416edf390fb5c864af881d7a4e98d7d902dfeb12 WatchSource:0}: Error finding container 
a0c768206b2185acc38e2bda416edf390fb5c864af881d7a4e98d7d902dfeb12: Status 404 returned error can't find the container with id a0c768206b2185acc38e2bda416edf390fb5c864af881d7a4e98d7d902dfeb12 Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.365329 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" event={"ID":"5d543cd6-dc4d-4ad7-b617-389465cd2cd7","Type":"ContainerStarted","Data":"34be09229353ed50a6e1223e0fe9fcd626dc7fff714aa53a7796dad136930dd3"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.365553 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" event={"ID":"5d543cd6-dc4d-4ad7-b617-389465cd2cd7","Type":"ContainerStarted","Data":"bc53191f0fc1cbb089542df4e4812b74a1b5c6aeb26bf3adf937b69e9353ae73"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.370355 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fc989"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.372337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" event={"ID":"5396bd13-b4d6-42d2-834d-36e8e88715b5","Type":"ContainerStarted","Data":"9fa9ff57fe24e8793e8022bf9195e1a0e9bd7855cfc93e0f2eeb274238beb892"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.372365 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" event={"ID":"5396bd13-b4d6-42d2-834d-36e8e88715b5","Type":"ContainerStarted","Data":"f8d8c0c01ae846a0edc771cf24c6566c14f772116f28bda7cb974ea000313e82"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.373904 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" event={"ID":"916ef1b0-45af-4b2a-b4ec-ad6f78f43c0c","Type":"ContainerStarted","Data":"09a225bb1b7baeb6e0a882f6bf5e86912fae095b604c5e5758a1e45ab7584364"} Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.376041 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea95ef83_4d37_4fa3_b58c_5712a3fe0450.slice/crio-8f787504eaa57de28bac69138d135f250dd98c8a9532b777710b6047bb331942 WatchSource:0}: Error finding container 8f787504eaa57de28bac69138d135f250dd98c8a9532b777710b6047bb331942: Status 404 returned error can't find the container with id 8f787504eaa57de28bac69138d135f250dd98c8a9532b777710b6047bb331942 Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.379245 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" event={"ID":"4c14ef86-26dd-4ad1-854e-2592ba200b02","Type":"ContainerStarted","Data":"27e2be5aa0de413aa429d41aaa4a7fa8907b4f1c58e03b3db63e90656fc0bcd8"} Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.381301 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0face492_83c1_49d4_bc1e_7de407151988.slice/crio-48da0ef58925f14b1014f92b1ed89ac8055c2064e235f3a6a574fd06ae1f262c WatchSource:0}: Error finding container 48da0ef58925f14b1014f92b1ed89ac8055c2064e235f3a6a574fd06ae1f262c: Status 404 returned error can't find the container with id 48da0ef58925f14b1014f92b1ed89ac8055c2064e235f3a6a574fd06ae1f262c Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.383762 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sqgwf" event={"ID":"451ced6a-ebdc-43a3-9639-5d74f0885fed","Type":"ContainerStarted","Data":"96858b8ac0080910c25b93266e4c5327c5b6d6417e4e7c5a6a1292c58c9352d8"} Feb 17 13:38:46 crc kubenswrapper[4768]: 
I0217 13:38:46.388140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" event={"ID":"0af47ec2-b35c-48df-8f91-9c878fb5ee94","Type":"ContainerStarted","Data":"bd662b0fcf80ae8592adf3fc20a8ef23c5f24a6752356787a93772e3687f6125"} Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.388562 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f8bd1f_9a14_4725_a333_ee7509778b5d.slice/crio-9ade9ce96b94d478c6acadbc599460dc662a98ebe7f86c585e40427e6b9363ea WatchSource:0}: Error finding container 9ade9ce96b94d478c6acadbc599460dc662a98ebe7f86c585e40427e6b9363ea: Status 404 returned error can't find the container with id 9ade9ce96b94d478c6acadbc599460dc662a98ebe7f86c585e40427e6b9363ea Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.388585 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.390917 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bsrtm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.390955 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.400947 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00ff8eee_3713_495f_a7c7_d05bba726cda.slice/crio-d9be76fc786cc085e5ee1cae228ba16c3ce2712e1db73b74318bdc6e9d094938 WatchSource:0}: Error finding container d9be76fc786cc085e5ee1cae228ba16c3ce2712e1db73b74318bdc6e9d094938: Status 404 returned error can't find the container with id d9be76fc786cc085e5ee1cae228ba16c3ce2712e1db73b74318bdc6e9d094938 Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.407131 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" event={"ID":"1bc575bb-6b05-4fe4-92fb-e467de4810b7","Type":"ContainerStarted","Data":"5cf644e8fb2f4af94c60de0183d49d1a8a18ff764d2b1e19e6382e9e3714fe7d"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.407173 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" event={"ID":"1bc575bb-6b05-4fe4-92fb-e467de4810b7","Type":"ContainerStarted","Data":"ca118e88683b9477a9b66254dafb5de106ca3541904e13253f0630b856b7cef1"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.418041 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" event={"ID":"6a5895c9-f283-43f0-82d7-c8a0cbf377ce","Type":"ContainerStarted","Data":"4f49ff791c1b5d0704753a9200fe6d70413c4de1f7965cb7cdfc7967939a357e"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.426524 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ql5b5" event={"ID":"49862fb8-6a93-48ac-926a-846f72a67989","Type":"ContainerStarted","Data":"7c6301438ddf6ba9c14ccdf3c88eac55a3a5eedc52ab0393a1e73963b255bf4e"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.428599 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" 
event={"ID":"9548bee5-799d-49de-bc66-296f14396f43","Type":"ContainerStarted","Data":"b4eb3ca53b2fb9ccde78ffb0d858294f00cb26f818a1411f900c9afd89310bfa"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.428633 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" event={"ID":"9548bee5-799d-49de-bc66-296f14396f43","Type":"ContainerStarted","Data":"ce4b15ea7f385923ddfc3a7134c9e2478ace5e54f271590aaa5cf0bc5943fcd6"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.431154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" event={"ID":"27d2fb4d-0721-4384-bfd5-2070137b6e1c","Type":"ContainerStarted","Data":"e33b3f703a141a609b9d227e97086e31acf8127155de6c68b2a70d6ac4cb014e"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.431203 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" event={"ID":"27d2fb4d-0721-4384-bfd5-2070137b6e1c","Type":"ContainerStarted","Data":"3ee8411db057ee32184b7e0d8d0a9e2960ce2357e0e088048f865a4bb05586d3"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.432935 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" event={"ID":"eb861f06-bf99-46e8-8627-c3d99245994b","Type":"ContainerStarted","Data":"9c706ad94b4871cb69c9ebd639279c964ce66e8b9f9392609ea586a0659c0aeb"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.435775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" event={"ID":"28505874-0a70-4f53-8070-607918790abe","Type":"ContainerStarted","Data":"00974cb10dabd8920a7e1a41ccead24b1428ffff9d5b5362461a5384ab682899"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.435855 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.436294 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:46.936267707 +0000 UTC m=+146.215654139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.438401 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" event={"ID":"e871b050-8136-486d-abc5-59a91f53d26c","Type":"ContainerStarted","Data":"ce093362ea35a72316ba573862bc647078ed7c9e9eeb41d1233adb54a36af78a"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.438444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" event={"ID":"e871b050-8136-486d-abc5-59a91f53d26c","Type":"ContainerStarted","Data":"6a02738943a3a4cea98386cfcbffd35b2406e1febccbfa359bceaf420e95a53a"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.438585 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.439629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" event={"ID":"5dc809a2-ba43-4858-9f51-ce7f2e366f29","Type":"ContainerStarted","Data":"a66459cca4045206e5ca3e52dd0807ba427fd1d93ac3009535c4185938f3d736"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.440794 4768 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ghmbf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.440842 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" podUID="e871b050-8136-486d-abc5-59a91f53d26c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.444743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" event={"ID":"43681511-fb8b-441c-bde6-0b1fa3cd8955","Type":"ContainerStarted","Data":"d12ed3f30d26d0f1cf6f1436ade8dedcb74e42c9631774b28c7476367384c076"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.448137 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" event={"ID":"3084c4ad-c24d-48e2-9734-99ca07d07bab","Type":"ContainerStarted","Data":"62be56e872b5d7e0e7f9b6ddbaade02e8e4a81cd1131d9311f4b422794c1fa1e"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.449493 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-server-k6fdj" event={"ID":"31c906f5-7452-4a91-ac3e-3c230e7785aa","Type":"ContainerStarted","Data":"afe4887f1fa4b35050f7033edf9113484cd41623592a2a291fdccfaef0d12e0d"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.452783 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" event={"ID":"c8527686-e4fe-4252-a6ac-b6ed4c7ad183","Type":"ContainerStarted","Data":"1a7fbb19ddcab3db04cd4231516fe4ad54509da9ad9ed67819e76948398d3b3c"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.452836 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" event={"ID":"c8527686-e4fe-4252-a6ac-b6ed4c7ad183","Type":"ContainerStarted","Data":"51a96e121e46c467db0f72bc171be6a23b39977d8fae16608947a2df8dac4371"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.453740 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.455584 4768 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xwxcr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.455643 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" podUID="c8527686-e4fe-4252-a6ac-b6ed4c7ad183" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.457327 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" event={"ID":"28c95f1d-fd75-4161-92ca-4cc1e928a1ba","Type":"ContainerStarted","Data":"31fbcf3e938cb38f482dd713eb622c307d3210f7a43eb501f6e0ee8e9662ec43"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.457362 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" event={"ID":"28c95f1d-fd75-4161-92ca-4cc1e928a1ba","Type":"ContainerStarted","Data":"392fbca6821207cd9759249db449e407ee95cc58e5a9ba4464a560f717b45a45"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.459705 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" event={"ID":"b7837593-1275-40cb-820f-afe9cb13fad4","Type":"ContainerStarted","Data":"2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.459932 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.461615 4768 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-h77q6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.461654 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.462722 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" event={"ID":"6205a23a-a18f-44c3-82be-ccecbf757630","Type":"ContainerStarted","Data":"241f8b83579717bc90abffa1e31613102c3da370be8ecf16ed29d5afbb17640c"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.487587 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" event={"ID":"aa502e6c-070c-46ab-a1b8-82e34b55aad7","Type":"ContainerStarted","Data":"f44605bf7b23a2b3c4581745608bbe5ca028ec728f4e1eaa45cb5af5c2899ea5"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.487637 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" event={"ID":"aa502e6c-070c-46ab-a1b8-82e34b55aad7","Type":"ContainerStarted","Data":"9cc945c56de6a3affabc96f6f6257a304edd199beabae651aee6e73867f8f5e7"} Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.499512 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ztfdt"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.517529 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hxzgb"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.519761 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.534531 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-26dvz"] Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.539417 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: 
\"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.541597 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.0415835 +0000 UTC m=+146.320969942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.580819 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2p2tw" podStartSLOduration=125.58080095 podStartE2EDuration="2m5.58080095s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.542156728 +0000 UTC m=+145.821543170" watchObservedRunningTime="2026-02-17 13:38:46.58080095 +0000 UTC m=+145.860187392" Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.594657 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod150a773f_d920_422b_b8d3_3e33876a0642.slice/crio-cb6730dcc9a6c5636b144a8b6de056ea7c875850bf499d6413de600c4695d8b7 WatchSource:0}: Error finding container cb6730dcc9a6c5636b144a8b6de056ea7c875850bf499d6413de600c4695d8b7: Status 404 returned error can't find the container with 
id cb6730dcc9a6c5636b144a8b6de056ea7c875850bf499d6413de600c4695d8b7 Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.613699 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9e8c957_294c_4be0_812d_9cc81edf44f6.slice/crio-09bb481d7782b6e04ab218e1b5eb91e44b16755cc202d7d842191e569af31886 WatchSource:0}: Error finding container 09bb481d7782b6e04ab218e1b5eb91e44b16755cc202d7d842191e569af31886: Status 404 returned error can't find the container with id 09bb481d7782b6e04ab218e1b5eb91e44b16755cc202d7d842191e569af31886 Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.621710 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kqwfk" podStartSLOduration=125.621692502 podStartE2EDuration="2m5.621692502s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.621163056 +0000 UTC m=+145.900549498" watchObservedRunningTime="2026-02-17 13:38:46.621692502 +0000 UTC m=+145.901078954" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.623285 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tpnsx" podStartSLOduration=124.623273472 podStartE2EDuration="2m4.623273472s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.579155388 +0000 UTC m=+145.858541830" watchObservedRunningTime="2026-02-17 13:38:46.623273472 +0000 UTC m=+145.902659914" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.642335 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.644135 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.144084954 +0000 UTC m=+146.423471426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: W0217 13:38:46.649405 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda192d1d8_c111_4c76_b256_7110fc99b045.slice/crio-e0c90cf827a01c9a53cd804c505ae488df5f2d926fc8d39c1fde93368de33c32 WatchSource:0}: Error finding container e0c90cf827a01c9a53cd804c505ae488df5f2d926fc8d39c1fde93368de33c32: Status 404 returned error can't find the container with id e0c90cf827a01c9a53cd804c505ae488df5f2d926fc8d39c1fde93368de33c32 Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.711550 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ql5b5" podStartSLOduration=125.711529219 podStartE2EDuration="2m5.711529219s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.708077481 +0000 UTC m=+145.987463933" watchObservedRunningTime="2026-02-17 13:38:46.711529219 +0000 UTC m=+145.990915671" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.745007 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.745440 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.245423832 +0000 UTC m=+146.524810264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.751655 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" podStartSLOduration=125.751637638 podStartE2EDuration="2m5.751637638s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.749743718 +0000 UTC m=+146.029130160" watchObservedRunningTime="2026-02-17 13:38:46.751637638 +0000 UTC m=+146.031024080" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.774915 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.777834 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.777901 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 
13:38:46.790381 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" podStartSLOduration=125.790361151 podStartE2EDuration="2m5.790361151s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.789201435 +0000 UTC m=+146.068587887" watchObservedRunningTime="2026-02-17 13:38:46.790361151 +0000 UTC m=+146.069747603" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.846989 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.847161 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.347135783 +0000 UTC m=+146.626522235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.847531 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.847854 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.347843275 +0000 UTC m=+146.627229717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.862528 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" podStartSLOduration=124.862509524 podStartE2EDuration="2m4.862509524s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.826245347 +0000 UTC m=+146.105631799" watchObservedRunningTime="2026-02-17 13:38:46.862509524 +0000 UTC m=+146.141895966" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.907457 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vx6f4" podStartSLOduration=126.907437243 podStartE2EDuration="2m6.907437243s" podCreationTimestamp="2026-02-17 13:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.869421742 +0000 UTC m=+146.148808184" watchObservedRunningTime="2026-02-17 13:38:46.907437243 +0000 UTC m=+146.186823685" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.908508 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gvm54" podStartSLOduration=125.908497097 podStartE2EDuration="2m5.908497097s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.906564606 +0000 UTC m=+146.185951058" watchObservedRunningTime="2026-02-17 13:38:46.908497097 +0000 UTC m=+146.187883539" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.953448 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nf6w5" podStartSLOduration=125.953429476 podStartE2EDuration="2m5.953429476s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:46.953352654 +0000 UTC m=+146.232739096" watchObservedRunningTime="2026-02-17 13:38:46.953429476 +0000 UTC m=+146.232815918" Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.953544 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.953624 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.453605952 +0000 UTC m=+146.732992394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:46 crc kubenswrapper[4768]: I0217 13:38:46.960567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:46 crc kubenswrapper[4768]: E0217 13:38:46.961728 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.461714966 +0000 UTC m=+146.741101408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.024979 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" podStartSLOduration=125.024960639 podStartE2EDuration="2m5.024960639s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.023395071 +0000 UTC m=+146.302781513" watchObservedRunningTime="2026-02-17 13:38:47.024960639 +0000 UTC m=+146.304347081" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.066051 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.066402 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.566382468 +0000 UTC m=+146.845768930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.106394 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ds4bl" podStartSLOduration=125.106378172 podStartE2EDuration="2m5.106378172s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.065054006 +0000 UTC m=+146.344440468" watchObservedRunningTime="2026-02-17 13:38:47.106378172 +0000 UTC m=+146.385764614" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.107256 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-k6fdj" podStartSLOduration=7.107250181 podStartE2EDuration="7.107250181s" podCreationTimestamp="2026-02-17 13:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.104263396 +0000 UTC m=+146.383649858" watchObservedRunningTime="2026-02-17 13:38:47.107250181 +0000 UTC m=+146.386636633" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.223778 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: 
\"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.225465 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.725428007 +0000 UTC m=+147.004814469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.325409 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.326050 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.826029712 +0000 UTC m=+147.105416154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.426922 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.427295 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:47.927282417 +0000 UTC m=+147.206668859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.528877 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.529530 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.029513023 +0000 UTC m=+147.308899465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.531315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" event={"ID":"28c95f1d-fd75-4161-92ca-4cc1e928a1ba","Type":"ContainerStarted","Data":"1cc19840c379e3f9a422cc654ddeb467c7f6a123bd067150dc5ab43cb37a47d9"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.554304 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-8c6lh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.554378 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" podUID="4c14ef86-26dd-4ad1-854e-2592ba200b02" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.556307 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" event={"ID":"4c14ef86-26dd-4ad1-854e-2592ba200b02","Type":"ContainerStarted","Data":"d0a896b19056addda7b5e947158a9ba357fdd8d89a98642343b04bea6273e3de"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.556365 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.558032 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fc989" event={"ID":"77b1f878-6463-4342-b3f1-96c32e69e4d9","Type":"ContainerStarted","Data":"4fa83243a1887f94f94cb834d5005538f8d908a29cdc2c7f3f1dff7ab70c661e"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.569386 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-t54pq" podStartSLOduration=126.569369922 podStartE2EDuration="2m6.569369922s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.16528719 +0000 UTC m=+146.444673642" watchObservedRunningTime="2026-02-17 13:38:47.569369922 +0000 UTC m=+146.848756364" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.569919 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" event={"ID":"43681511-fb8b-441c-bde6-0b1fa3cd8955","Type":"ContainerStarted","Data":"eba6ed2320878428632e06e094f1451957af63afc81bf438ec4ee3b152835483"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.570421 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" podStartSLOduration=127.570413676 podStartE2EDuration="2m7.570413676s" podCreationTimestamp="2026-02-17 13:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.568812395 +0000 UTC m=+146.848198847" watchObservedRunningTime="2026-02-17 13:38:47.570413676 +0000 UTC m=+146.849800118" Feb 17 13:38:47 crc 
kubenswrapper[4768]: I0217 13:38:47.579117 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" event={"ID":"a192d1d8-c111-4c76-b256-7110fc99b045","Type":"ContainerStarted","Data":"e0c90cf827a01c9a53cd804c505ae488df5f2d926fc8d39c1fde93368de33c32"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.585231 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" event={"ID":"5396bd13-b4d6-42d2-834d-36e8e88715b5","Type":"ContainerStarted","Data":"6dd909c55473bc7bb93cfcf8f768f99d249d7f18101e5c959d336cc12709a350"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.594079 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.596169 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fc989" podStartSLOduration=125.596153853 podStartE2EDuration="2m5.596153853s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.594913204 +0000 UTC m=+146.874299646" watchObservedRunningTime="2026-02-17 13:38:47.596153853 +0000 UTC m=+146.875540295" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.609511 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sqgwf" event={"ID":"451ced6a-ebdc-43a3-9639-5d74f0885fed","Type":"ContainerStarted","Data":"f3dd58a2d0ad083a82db2d21a93424bf09571b15f1f51481660d772054694d17"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.615667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" 
event={"ID":"29f8bd1f-9a14-4725-a333-ee7509778b5d","Type":"ContainerStarted","Data":"82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.615724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" event={"ID":"29f8bd1f-9a14-4725-a333-ee7509778b5d","Type":"ContainerStarted","Data":"9ade9ce96b94d478c6acadbc599460dc662a98ebe7f86c585e40427e6b9363ea"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.616531 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.620273 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5wgl6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.620342 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.627422 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" event={"ID":"d9e8c957-294c-4be0-812d-9cc81edf44f6","Type":"ContainerStarted","Data":"4422fabdc7851c1df4a918b52e4282cbf21de0f707e84ddff16752f56fa6b64f"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.627476 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" 
event={"ID":"d9e8c957-294c-4be0-812d-9cc81edf44f6","Type":"ContainerStarted","Data":"09bb481d7782b6e04ab218e1b5eb91e44b16755cc202d7d842191e569af31886"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.628182 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.629857 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-llw5g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.629920 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" podUID="d9e8c957-294c-4be0-812d-9cc81edf44f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.631514 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.633457 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.133438052 +0000 UTC m=+147.412824504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.640456 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" event={"ID":"ea95ef83-4d37-4fa3-b58c-5712a3fe0450","Type":"ContainerStarted","Data":"8f787504eaa57de28bac69138d135f250dd98c8a9532b777710b6047bb331942"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.641845 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" event={"ID":"6205a23a-a18f-44c3-82be-ccecbf757630","Type":"ContainerStarted","Data":"7a776bfbea6253b0d615db18c1c1e5b67480bec29ae5475ef0e43f093070e78a"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.651682 4768 generic.go:334] "Generic (PLEG): container finished" podID="27d2fb4d-0721-4384-bfd5-2070137b6e1c" containerID="e33b3f703a141a609b9d227e97086e31acf8127155de6c68b2a70d6ac4cb014e" exitCode=0 Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.651793 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" event={"ID":"27d2fb4d-0721-4384-bfd5-2070137b6e1c","Type":"ContainerDied","Data":"e33b3f703a141a609b9d227e97086e31acf8127155de6c68b2a70d6ac4cb014e"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.651819 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" 
event={"ID":"27d2fb4d-0721-4384-bfd5-2070137b6e1c","Type":"ContainerStarted","Data":"3ee6392a2a0f77fc88c4cc4018e8d63f422887f81e27224a0b8f939a03d47bfe"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.652784 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.654217 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ztfdt" event={"ID":"7aedec6a-6287-4e21-92a5-c818c2879842","Type":"ContainerStarted","Data":"1eaba5d622d14fb61d22b166fc64299ed1d7c9c35951a3a016fced213a0084c1"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.654244 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ztfdt" event={"ID":"7aedec6a-6287-4e21-92a5-c818c2879842","Type":"ContainerStarted","Data":"3d9aee3fcf875995c4be8cf3210079a81d06f4cd53dc6fc9311c209c55803a4c"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.656406 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" podStartSLOduration=126.656383462 podStartE2EDuration="2m6.656383462s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.625503144 +0000 UTC m=+146.904889596" watchObservedRunningTime="2026-02-17 13:38:47.656383462 +0000 UTC m=+146.935769904" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.667363 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" event={"ID":"5dc809a2-ba43-4858-9f51-ce7f2e366f29","Type":"ContainerStarted","Data":"781b33f24b43008ebbd6877da6e87d04f5fdbbceaced726d66659145d3901926"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 
13:38:47.673477 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" podStartSLOduration=125.673462128 podStartE2EDuration="2m5.673462128s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.655653519 +0000 UTC m=+146.935039961" watchObservedRunningTime="2026-02-17 13:38:47.673462128 +0000 UTC m=+146.952848570" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.676091 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" event={"ID":"0face492-83c1-49d4-bc1e-7de407151988","Type":"ContainerStarted","Data":"493e305c0fb6a166430cf00522a68a94918c96e88bc59e9b421c8bf6ccc2800b"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.676144 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" event={"ID":"0face492-83c1-49d4-bc1e-7de407151988","Type":"ContainerStarted","Data":"48da0ef58925f14b1014f92b1ed89ac8055c2064e235f3a6a574fd06ae1f262c"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.677940 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" event={"ID":"00ff8eee-3713-495f-a7c7-d05bba726cda","Type":"ContainerStarted","Data":"d9be76fc786cc085e5ee1cae228ba16c3ce2712e1db73b74318bdc6e9d094938"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.683825 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" event={"ID":"ff5cbab6-dd03-463e-9940-ad55678c9e38","Type":"ContainerStarted","Data":"fd333dfd05e931bac7e1ffb960b8b2ae725836fa22cc6969ee03d557f8de1907"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.684184 
4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" event={"ID":"ff5cbab6-dd03-463e-9940-ad55678c9e38","Type":"ContainerStarted","Data":"a0c768206b2185acc38e2bda416edf390fb5c864af881d7a4e98d7d902dfeb12"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.732858 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.733528 4768 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ghmbf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.733569 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" podUID="e871b050-8136-486d-abc5-59a91f53d26c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.734052 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" event={"ID":"150a773f-d920-422b-b8d3-3e33876a0642","Type":"ContainerStarted","Data":"cb6730dcc9a6c5636b144a8b6de056ea7c875850bf499d6413de600c4695d8b7"} Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.734886 4768 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xwxcr container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.734924 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" podUID="c8527686-e4fe-4252-a6ac-b6ed4c7ad183" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.737089 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.237070202 +0000 UTC m=+147.516456644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.742431 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bsrtm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.742582 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" 
podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.751885 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8lzlh" podStartSLOduration=125.751870077 podStartE2EDuration="2m5.751870077s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.674058156 +0000 UTC m=+146.953444598" watchObservedRunningTime="2026-02-17 13:38:47.751870077 +0000 UTC m=+147.031256519" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.773077 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" podStartSLOduration=125.773059241 podStartE2EDuration="2m5.773059241s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.75164197 +0000 UTC m=+147.031028412" watchObservedRunningTime="2026-02-17 13:38:47.773059241 +0000 UTC m=+147.052445683" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.774660 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-czhdm" podStartSLOduration=125.774648611 podStartE2EDuration="2m5.774648611s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.772346449 +0000 UTC m=+147.051732891" watchObservedRunningTime="2026-02-17 
13:38:47.774648611 +0000 UTC m=+147.054035053" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.780753 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:47 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:47 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:47 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.780799 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.797136 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" podStartSLOduration=126.797123336 podStartE2EDuration="2m6.797123336s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.79600286 +0000 UTC m=+147.075389292" watchObservedRunningTime="2026-02-17 13:38:47.797123336 +0000 UTC m=+147.076509778" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.838426 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.847872 4768 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.347854507 +0000 UTC m=+147.627241009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.871919 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wdd7d" podStartSLOduration=125.871904711 podStartE2EDuration="2m5.871904711s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.849004373 +0000 UTC m=+147.128390815" watchObservedRunningTime="2026-02-17 13:38:47.871904711 +0000 UTC m=+147.151291153" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.872001 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" podStartSLOduration=125.871997944 podStartE2EDuration="2m5.871997944s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.870501877 +0000 UTC m=+147.149888319" watchObservedRunningTime="2026-02-17 13:38:47.871997944 +0000 UTC m=+147.151384376" Feb 17 13:38:47 crc kubenswrapper[4768]: 
I0217 13:38:47.906334 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-p7h6m" podStartSLOduration=125.90631641 podStartE2EDuration="2m5.90631641s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.905081031 +0000 UTC m=+147.184467483" watchObservedRunningTime="2026-02-17 13:38:47.90631641 +0000 UTC m=+147.185702852" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.948197 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:47 crc kubenswrapper[4768]: E0217 13:38:47.948692 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.448674109 +0000 UTC m=+147.728060561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.961033 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" podStartSLOduration=125.961009595 podStartE2EDuration="2m5.961009595s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.931582343 +0000 UTC m=+147.210968775" watchObservedRunningTime="2026-02-17 13:38:47.961009595 +0000 UTC m=+147.240396047" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.985674 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" podStartSLOduration=125.985652998 podStartE2EDuration="2m5.985652998s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.982557691 +0000 UTC m=+147.261944133" watchObservedRunningTime="2026-02-17 13:38:47.985652998 +0000 UTC m=+147.265039440" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.985994 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" podStartSLOduration=126.985988459 podStartE2EDuration="2m6.985988459s" podCreationTimestamp="2026-02-17 13:36:41 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.962574345 +0000 UTC m=+147.241960797" watchObservedRunningTime="2026-02-17 13:38:47.985988459 +0000 UTC m=+147.265374911" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.992196 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:47 crc kubenswrapper[4768]: I0217 13:38:47.993003 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.001160 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ztfdt" podStartSLOduration=8.001143995 podStartE2EDuration="8.001143995s" podCreationTimestamp="2026-02-17 13:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:47.996628063 +0000 UTC m=+147.276014495" watchObservedRunningTime="2026-02-17 13:38:48.001143995 +0000 UTC m=+147.280530447" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.008491 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.015342 4768 csr.go:261] certificate signing request csr-qdb4m is approved, waiting to be issued Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.026472 4768 csr.go:257] certificate signing request csr-qdb4m is issued Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.050836 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.051408 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.551393161 +0000 UTC m=+147.830779603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.152484 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.152800 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.65277583 +0000 UTC m=+147.932162272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.153039 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.153335 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.653323827 +0000 UTC m=+147.932710269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.249622 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.254083 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.254411 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.754393536 +0000 UTC m=+148.033779978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.355273 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.355548 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.855537348 +0000 UTC m=+148.134923780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.456550 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.456766 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:48.956735672 +0000 UTC m=+148.236122114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.558526 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.558827 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.058811574 +0000 UTC m=+148.338198016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.659634 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.659814 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.15978058 +0000 UTC m=+148.439167012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.660274 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.660619 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.160603976 +0000 UTC m=+148.439990428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.737544 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" event={"ID":"9548bee5-799d-49de-bc66-296f14396f43","Type":"ContainerStarted","Data":"bd43c5c4f11a8803f1e4c0e089d296dbfddf1a031dc15067ef897d4a6c04c524"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.739598 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sqgwf" event={"ID":"451ced6a-ebdc-43a3-9639-5d74f0885fed","Type":"ContainerStarted","Data":"e592e94aaf8ad3e245e5ace52c81764ada5dcc5e58c37e524e610b506102f5bd"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.740180 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.741509 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4thw" event={"ID":"5dc809a2-ba43-4858-9f51-ce7f2e366f29","Type":"ContainerStarted","Data":"390f2c3a0168f445392caf658fc4f5430d3acea5f7d5df727072202e1210ca1f"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.743282 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fc989" event={"ID":"77b1f878-6463-4342-b3f1-96c32e69e4d9","Type":"ContainerStarted","Data":"76ef3d295ecd199365e28162358db7b10a738cebd1ace106b82051d0dbb48f17"} Feb 17 13:38:48 crc 
kubenswrapper[4768]: I0217 13:38:48.744858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rm2vj" event={"ID":"ea95ef83-4d37-4fa3-b58c-5712a3fe0450","Type":"ContainerStarted","Data":"0bf8c05dc4f72606933687bb7dad50b5c0cfcbaa1bd29042085f3cb12ca5aaaa"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.746424 4768 generic.go:334] "Generic (PLEG): container finished" podID="150a773f-d920-422b-b8d3-3e33876a0642" containerID="fc23d1f03d3c0fa1575b88aa1a5a21f0f0465498cbc1e6b706fbcb403cc6b49a" exitCode=0 Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.746460 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" event={"ID":"150a773f-d920-422b-b8d3-3e33876a0642","Type":"ContainerDied","Data":"fc23d1f03d3c0fa1575b88aa1a5a21f0f0465498cbc1e6b706fbcb403cc6b49a"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.746474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" event={"ID":"150a773f-d920-422b-b8d3-3e33876a0642","Type":"ContainerStarted","Data":"e340b6da1eae6a7b0a8d97aca439623077f9ecd4522c07a111cbdda17f7658e0"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.748268 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" event={"ID":"a192d1d8-c111-4c76-b256-7110fc99b045","Type":"ContainerStarted","Data":"ba2c59ba13051e19e9a650032082db5d12b31ca42f8fafd5437c757fa6746513"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.748290 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" event={"ID":"a192d1d8-c111-4c76-b256-7110fc99b045","Type":"ContainerStarted","Data":"894de58dbc9e183933872ae2a2bbe1ff1d431d2c28092dfcd7538b1cfaaf91e6"} Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.752282 4768 
patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-llw5g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.752328 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" podUID="d9e8c957-294c-4be0-812d-9cc81edf44f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.752503 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5wgl6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.752569 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.755296 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-8c6lh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.755357 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" 
podUID="4c14ef86-26dd-4ad1-854e-2592ba200b02" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.756028 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-n5tl8" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.760773 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.761159 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.261144279 +0000 UTC m=+148.540530721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.778498 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:48 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:48 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:48 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.778542 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.805379 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-26dvz" podStartSLOduration=126.805361516 podStartE2EDuration="2m6.805361516s" podCreationTimestamp="2026-02-17 13:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:48.802921149 +0000 UTC m=+148.082307591" watchObservedRunningTime="2026-02-17 13:38:48.805361516 +0000 UTC m=+148.084747958" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.805979 4768 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-dns/dns-default-sqgwf" podStartSLOduration=8.805973745 podStartE2EDuration="8.805973745s" podCreationTimestamp="2026-02-17 13:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:48.766131206 +0000 UTC m=+148.045517658" watchObservedRunningTime="2026-02-17 13:38:48.805973745 +0000 UTC m=+148.085360187" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.808656 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xwxcr" Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.862147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.862525 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.362510438 +0000 UTC m=+148.641896880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:48 crc kubenswrapper[4768]: I0217 13:38:48.971613 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:48 crc kubenswrapper[4768]: E0217 13:38:48.972050 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.472028743 +0000 UTC m=+148.751415185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.028155 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 13:33:48 +0000 UTC, rotation deadline is 2026-11-09 12:30:50.652980813 +0000 UTC Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.028197 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6358h52m1.624787079s for next certificate rotation Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.073322 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.073830 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.573814385 +0000 UTC m=+148.853200827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.174304 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.174499 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.674471221 +0000 UTC m=+148.953857673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.174609 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.174881 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.674868815 +0000 UTC m=+148.954255257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.275826 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.275965 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.775946944 +0000 UTC m=+149.055333386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.276086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.276354 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.776346787 +0000 UTC m=+149.055733229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.377654 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.377853 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.877828319 +0000 UTC m=+149.157214761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.378020 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.378389 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.878373636 +0000 UTC m=+149.157760088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.479370 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.479645 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.479677 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.479755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.480435 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:49.980406827 +0000 UTC m=+149.259793289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.480869 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.489940 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.490341 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.558503 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.573140 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.582621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.582717 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.583013 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.082996453 +0000 UTC m=+149.362382885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.591739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.683732 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.683921 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.183896468 +0000 UTC m=+149.463282910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.684152 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.684408 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.184396643 +0000 UTC m=+149.463783085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.740282 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pc222"] Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.741342 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.746479 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.767951 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pc222"] Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.781972 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:49 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:49 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:49 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.782020 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.785095 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.785427 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.285413432 +0000 UTC m=+149.564799874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.798964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" event={"ID":"150a773f-d920-422b-b8d3-3e33876a0642","Type":"ContainerStarted","Data":"2bd4f50f57207cb5ed845e00eaa4eaae43e4b4ab892e9e22a245d45e1bea0b6b"} Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.816421 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" event={"ID":"00ff8eee-3713-495f-a7c7-d05bba726cda","Type":"ContainerStarted","Data":"ad3077531a1e0e81b46d0b03bf8efe066f6c27dce491069ecba6e6c11a25a238"} Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.836873 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" podStartSLOduration=128.836854305 podStartE2EDuration="2m8.836854305s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:49.834891053 +0000 UTC m=+149.114277515" watchObservedRunningTime="2026-02-17 13:38:49.836854305 +0000 UTC m=+149.116240747" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.867227 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.886978 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-catalog-content\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.887095 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.887208 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4dv\" (UniqueName: \"kubernetes.io/projected/a409f38d-1da9-42e5-94ff-502133f6cee2-kube-api-access-7n4dv\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " 
pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.887352 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-utilities\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.893146 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.39313159 +0000 UTC m=+149.672518032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.991238 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.991393 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 13:38:50.491367741 +0000 UTC m=+149.770754183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.991719 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-utilities\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.991783 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-catalog-content\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.991833 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.991874 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4dv\" (UniqueName: 
\"kubernetes.io/projected/a409f38d-1da9-42e5-94ff-502133f6cee2-kube-api-access-7n4dv\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.992652 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-utilities\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: I0217 13:38:49.992948 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-catalog-content\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:49 crc kubenswrapper[4768]: E0217 13:38:49.993249 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.49323433 +0000 UTC m=+149.772620772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.058271 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4dv\" (UniqueName: \"kubernetes.io/projected/a409f38d-1da9-42e5-94ff-502133f6cee2-kube-api-access-7n4dv\") pod \"certified-operators-pc222\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.087165 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.097549 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.097894 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.597880401 +0000 UTC m=+149.877266843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.145565 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tmgb"] Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.146511 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.172937 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tmgb"] Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.198536 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.198861 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.698850188 +0000 UTC m=+149.978236630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.299623 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.299818 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvbd\" (UniqueName: \"kubernetes.io/projected/b7de1a69-e892-4ca4-a61f-20a221ce38ba-kube-api-access-4wvbd\") pod \"certified-operators-4tmgb\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.299880 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-catalog-content\") pod \"certified-operators-4tmgb\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.299943 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-utilities\") pod \"certified-operators-4tmgb\" 
(UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.300078 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.800062042 +0000 UTC m=+150.079448484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.362284 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wsm7m"] Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.363408 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.374024 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.384803 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsm7m"] Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.402121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-catalog-content\") pod \"certified-operators-4tmgb\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.402192 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-utilities\") pod \"certified-operators-4tmgb\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.402221 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.402282 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvbd\" (UniqueName: \"kubernetes.io/projected/b7de1a69-e892-4ca4-a61f-20a221ce38ba-kube-api-access-4wvbd\") pod \"certified-operators-4tmgb\" (UID: 
\"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.403268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-catalog-content\") pod \"certified-operators-4tmgb\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.403550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-utilities\") pod \"certified-operators-4tmgb\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.403829 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:50.903815316 +0000 UTC m=+150.183201758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.499131 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvbd\" (UniqueName: \"kubernetes.io/projected/b7de1a69-e892-4ca4-a61f-20a221ce38ba-kube-api-access-4wvbd\") pod \"certified-operators-4tmgb\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.505614 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.506080 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkcmq\" (UniqueName: \"kubernetes.io/projected/7e7136ca-949e-49ff-9f79-47e485a039cb-kube-api-access-rkcmq\") pod \"community-operators-wsm7m\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.506147 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-catalog-content\") pod \"community-operators-wsm7m\" (UID: 
\"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.506180 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-utilities\") pod \"community-operators-wsm7m\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.506398 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.006380703 +0000 UTC m=+150.285767145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.575350 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9r46k"] Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.576505 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.608256 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.608320 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkcmq\" (UniqueName: \"kubernetes.io/projected/7e7136ca-949e-49ff-9f79-47e485a039cb-kube-api-access-rkcmq\") pod \"community-operators-wsm7m\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.608358 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-catalog-content\") pod \"community-operators-wsm7m\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.608382 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-utilities\") pod \"community-operators-wsm7m\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.608870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-utilities\") pod \"community-operators-wsm7m\" 
(UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.609076 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.109061813 +0000 UTC m=+150.388448255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.609447 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-catalog-content\") pod \"community-operators-wsm7m\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.646967 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkcmq\" (UniqueName: \"kubernetes.io/projected/7e7136ca-949e-49ff-9f79-47e485a039cb-kube-api-access-rkcmq\") pod \"community-operators-wsm7m\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.724171 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9r46k"] Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.726005 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.726357 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-catalog-content\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.726430 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-utilities\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.726514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvg2\" (UniqueName: \"kubernetes.io/projected/6a25c47f-7f1c-42ed-85bf-acfe8949338b-kube-api-access-7hvg2\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.726675 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.226652761 +0000 UTC m=+150.506039203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.726869 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.773277 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-llw5g" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.780218 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:50 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:50 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:50 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.780279 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.800710 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.828871 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.828959 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hvg2\" (UniqueName: \"kubernetes.io/projected/6a25c47f-7f1c-42ed-85bf-acfe8949338b-kube-api-access-7hvg2\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.829008 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-catalog-content\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.829068 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-utilities\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.829566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-utilities\") pod \"community-operators-9r46k\" 
(UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.829788 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.329763654 +0000 UTC m=+150.609150096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.829847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-catalog-content\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.867309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"023dd76d57c74391e891f03a556c15f2247527f82ee60f0944c3bbfb113203e6"} Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.894445 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b926d64ce3788dee0027c771c741f17b0d0e06961348016091be3b6a7f2a6e13"} 
Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.903498 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hvg2\" (UniqueName: \"kubernetes.io/projected/6a25c47f-7f1c-42ed-85bf-acfe8949338b-kube-api-access-7hvg2\") pod \"community-operators-9r46k\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.931409 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"688cc569437561be1aa4e1e369f1a4ac304430a63b1be25064e6ff349d8f816d"} Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.932363 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:38:50 crc kubenswrapper[4768]: I0217 13:38:50.932611 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:50 crc kubenswrapper[4768]: E0217 13:38:50.932826 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.432813606 +0000 UTC m=+150.712200048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.034005 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.035386 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.535372553 +0000 UTC m=+150.814758995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.138376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.139029 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.639013973 +0000 UTC m=+150.918400415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.246932 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.247318 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.747304859 +0000 UTC m=+151.026691301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.290567 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m66tk" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.310730 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pc222"] Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.358390 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.358683 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.858668642 +0000 UTC m=+151.138055084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.459795 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.460080 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:51.960068612 +0000 UTC m=+151.239455054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.569661 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.570167 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.070154365 +0000 UTC m=+151.349540797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.670803 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.671164 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.171152972 +0000 UTC m=+151.450539414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.691800 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tmgb"] Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.742685 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.743413 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.754866 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.755148 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.761270 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.775897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.776299 4768 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.276280239 +0000 UTC m=+151.555666681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.790527 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsm7m"] Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.804369 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:51 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:51 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:51 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.804431 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.878008 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.878119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.878169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.878490 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.378477614 +0000 UTC m=+151.657864056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.924845 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9r46k"] Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.980692 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.981076 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.981180 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:51 crc kubenswrapper[4768]: E0217 13:38:51.981670 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.48164678 +0000 UTC m=+151.761033222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.981717 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:51 crc kubenswrapper[4768]: I0217 13:38:51.981761 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ncxf5"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.010121 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.011990 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncxf5"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.016866 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.024005 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.030137 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7325a3517d777bb00661323869a5ba6e9ac8c5b1eeba30611b6e9fcdf1d953f3"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.030703 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.058186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tmgb" event={"ID":"b7de1a69-e892-4ca4-a61f-20a221ce38ba","Type":"ContainerStarted","Data":"93c5a7bb556cb317acdd1e795a04c07ecd06768393a332b995cb23e5e67db69d"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.073938 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bcdd739db4d4b0fb71f0430e5f6242b02eed93dba9c0a18c4a1fd26ed3db244b"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.086080 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnlnm\" (UniqueName: \"kubernetes.io/projected/9497730e-2a05-40b9-a4ee-364b67a9133c-kube-api-access-rnlnm\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.086180 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.086227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-utilities\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.086244 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-catalog-content\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.086521 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.586510778 +0000 UTC m=+151.865897220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.120092 4768 generic.go:334] "Generic (PLEG): container finished" podID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerID="5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960" exitCode=0 Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.120150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc222" event={"ID":"a409f38d-1da9-42e5-94ff-502133f6cee2","Type":"ContainerDied","Data":"5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.120192 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc222" event={"ID":"a409f38d-1da9-42e5-94ff-502133f6cee2","Type":"ContainerStarted","Data":"ae08bbc669c26ea54a1eccf5d8a7581a7db482300f371666c002c6680cf517cc"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.125848 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.126352 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.132327 4768 generic.go:334] "Generic (PLEG): container finished" podID="0face492-83c1-49d4-bc1e-7de407151988" containerID="493e305c0fb6a166430cf00522a68a94918c96e88bc59e9b421c8bf6ccc2800b" exitCode=0 Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.132390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" event={"ID":"0face492-83c1-49d4-bc1e-7de407151988","Type":"ContainerDied","Data":"493e305c0fb6a166430cf00522a68a94918c96e88bc59e9b421c8bf6ccc2800b"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.146395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5ff4240302bbba15ef58f1579b5d680aaa71b67c292025d195bf29a131165a65"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.153855 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsm7m" event={"ID":"7e7136ca-949e-49ff-9f79-47e485a039cb","Type":"ContainerStarted","Data":"ce796e0af413a3d24605aa1cb123933bbacb432e8731ba5e1f6cf64cdcf3e78f"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.155999 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" event={"ID":"00ff8eee-3713-495f-a7c7-d05bba726cda","Type":"ContainerStarted","Data":"1d97eb3598622eaf388d1df833033efef06d71c97daadbad5e9c8040c21a094d"} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.187355 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.187558 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.687527416 +0000 UTC m=+151.966913868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.187608 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnlnm\" (UniqueName: \"kubernetes.io/projected/9497730e-2a05-40b9-a4ee-364b67a9133c-kube-api-access-rnlnm\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.187733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.187793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-utilities\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.187816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-catalog-content\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.188279 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-catalog-content\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.189821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-utilities\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.189868 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.689851149 +0000 UTC m=+151.969237671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.239324 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnlnm\" (UniqueName: \"kubernetes.io/projected/9497730e-2a05-40b9-a4ee-364b67a9133c-kube-api-access-rnlnm\") pod \"redhat-marketplace-ncxf5\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.301000 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.301181 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.80115367 +0000 UTC m=+152.080540112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.301342 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.301876 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.801859772 +0000 UTC m=+152.081246214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.333515 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fbx28"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.334657 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.349752 4768 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.358760 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbx28"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.377767 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.405828 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.405973 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.905951076 +0000 UTC m=+152.185337518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.406369 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.406719 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-17 13:38:52.90671147 +0000 UTC m=+152.186097912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.507894 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.508081 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-catalog-content\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.508149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-utilities\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.508217 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh9d2\" 
(UniqueName: \"kubernetes.io/projected/e826302e-7052-4a6e-a626-93b7b433096a-kube-api-access-sh9d2\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.508426 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:53.00840046 +0000 UTC m=+152.287786902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.529776 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.609375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-catalog-content\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.609438 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-utilities\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " 
pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.609479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.609500 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh9d2\" (UniqueName: \"kubernetes.io/projected/e826302e-7052-4a6e-a626-93b7b433096a-kube-api-access-sh9d2\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.610030 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 13:38:53.110019857 +0000 UTC m=+152.389406299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vfpbq" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.610251 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-utilities\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.611550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-catalog-content\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.630298 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh9d2\" (UniqueName: \"kubernetes.io/projected/e826302e-7052-4a6e-a626-93b7b433096a-kube-api-access-sh9d2\") pod \"redhat-marketplace-fbx28\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.661225 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.709293 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.710009 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.710430 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:52 crc kubenswrapper[4768]: E0217 13:38:52.713151 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 13:38:53.21312215 +0000 UTC m=+152.492508592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.726676 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.726769 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.736068 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.745448 4768 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T13:38:52.349792995Z","Handler":null,"Name":""} Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.751809 4768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.751853 4768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.782570 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvb69 container/download-server namespace/openshift-console: Liveness 
probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.782635 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvb69 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.782628 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mvb69" podUID="7483ebd8-979d-429d-9197-cf5ae208af0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.782658 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvb69" podUID="7483ebd8-979d-429d-9197-cf5ae208af0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.795681 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:52 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:52 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:52 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.795730 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.813427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.813481 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.813501 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.816359 4768 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.816397 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.884944 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncxf5"] Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.888415 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vfpbq\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.914317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.914617 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.914670 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.914756 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.930373 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.943318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.956285 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbx28"] Feb 17 13:38:52 crc kubenswrapper[4768]: W0217 13:38:52.963297 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode826302e_7052_4a6e_a626_93b7b433096a.slice/crio-8e94a8d68f3d4f0f21c79d09a09891ba67811ff6ebd97fe4a6cc7806f1915e53 WatchSource:0}: Error finding container 8e94a8d68f3d4f0f21c79d09a09891ba67811ff6ebd97fe4a6cc7806f1915e53: Status 404 returned error can't find the container with id 8e94a8d68f3d4f0f21c79d09a09891ba67811ff6ebd97fe4a6cc7806f1915e53 Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.971030 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.971445 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.974888 4768 patch_prober.go:28] interesting pod/console-f9d7485db-9fmzj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.974937 4768 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-9fmzj" podUID="0030a046-d1bb-4a34-830c-c275306cee43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 17 13:38:52 crc kubenswrapper[4768]: I0217 13:38:52.988711 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.045269 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.131610 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fg2l6"] Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.133176 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.136510 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.143518 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fg2l6"] Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.163615 4768 generic.go:334] "Generic (PLEG): container finished" podID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerID="8c1edb7bbf59f6c069d4628f9c53364b649843a0212906e1d1b5db40b37b695c" exitCode=0 Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.163679 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tmgb" event={"ID":"b7de1a69-e892-4ca4-a61f-20a221ce38ba","Type":"ContainerDied","Data":"8c1edb7bbf59f6c069d4628f9c53364b649843a0212906e1d1b5db40b37b695c"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 
13:38:53.171082 4768 generic.go:334] "Generic (PLEG): container finished" podID="e826302e-7052-4a6e-a626-93b7b433096a" containerID="62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df" exitCode=0 Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.171354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbx28" event={"ID":"e826302e-7052-4a6e-a626-93b7b433096a","Type":"ContainerDied","Data":"62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.171411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbx28" event={"ID":"e826302e-7052-4a6e-a626-93b7b433096a","Type":"ContainerStarted","Data":"8e94a8d68f3d4f0f21c79d09a09891ba67811ff6ebd97fe4a6cc7806f1915e53"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.173171 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6379baf0-ecbe-40bd-bf11-a49e64a264d3","Type":"ContainerStarted","Data":"15c928500f59901ce72e0797b675673a187bee8dbf45b8a47d4de6672df06afa"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.173217 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6379baf0-ecbe-40bd-bf11-a49e64a264d3","Type":"ContainerStarted","Data":"c7eb4bcb4142322450e900194ba16d7374fa7368dd0fcf4bfbf6e40acbed2ebd"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.190717 4768 generic.go:334] "Generic (PLEG): container finished" podID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerID="31b098a6831bc783c28ca6a8b854b3d0d0d61267e51480482a65db81ec03bea7" exitCode=0 Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.190798 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9r46k" 
event={"ID":"6a25c47f-7f1c-42ed-85bf-acfe8949338b","Type":"ContainerDied","Data":"31b098a6831bc783c28ca6a8b854b3d0d0d61267e51480482a65db81ec03bea7"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.190829 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9r46k" event={"ID":"6a25c47f-7f1c-42ed-85bf-acfe8949338b","Type":"ContainerStarted","Data":"d80785c03391662c9f34ed3e05d403bbf46f8c6b15a459593193e6e372efa0d2"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.198175 4768 generic.go:334] "Generic (PLEG): container finished" podID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerID="6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb" exitCode=0 Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.198252 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsm7m" event={"ID":"7e7136ca-949e-49ff-9f79-47e485a039cb","Type":"ContainerDied","Data":"6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.204281 4768 generic.go:334] "Generic (PLEG): container finished" podID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerID="80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d" exitCode=0 Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.204448 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncxf5" event={"ID":"9497730e-2a05-40b9-a4ee-364b67a9133c","Type":"ContainerDied","Data":"80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.204482 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncxf5" event={"ID":"9497730e-2a05-40b9-a4ee-364b67a9133c","Type":"ContainerStarted","Data":"d7b837c2ab3f225a7a1115fa7a02de21d79487055c24cad566eecffda97250cb"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 
13:38:53.207216 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vfpbq"] Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.223629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-catalog-content\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.223678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-utilities\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.223702 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skbx\" (UniqueName: \"kubernetes.io/projected/3529a765-a06e-42c3-9a16-959ca7662469-kube-api-access-9skbx\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.224772 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" event={"ID":"00ff8eee-3713-495f-a7c7-d05bba726cda","Type":"ContainerStarted","Data":"6d87a6eb8071779113c1dc0ae953259d6b401b28edd5e4ef28ec63086689ee9c"} Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.224812 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" 
event={"ID":"00ff8eee-3713-495f-a7c7-d05bba726cda","Type":"ContainerStarted","Data":"6735fc937fce16fcb25626b820e69d4e88f92e8d2ac791789360959be93412b6"} Feb 17 13:38:53 crc kubenswrapper[4768]: W0217 13:38:53.240388 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cf79399_929e_43c8_9ceb_06619ef1edee.slice/crio-8b3dda1eb812d3349d4195832ad6fa1ec1992b55af18e2e5c42653dc2d063b6a WatchSource:0}: Error finding container 8b3dda1eb812d3349d4195832ad6fa1ec1992b55af18e2e5c42653dc2d063b6a: Status 404 returned error can't find the container with id 8b3dda1eb812d3349d4195832ad6fa1ec1992b55af18e2e5c42653dc2d063b6a Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.255441 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.255427597 podStartE2EDuration="2.255427597s" podCreationTimestamp="2026-02-17 13:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:53.222731632 +0000 UTC m=+152.502118074" watchObservedRunningTime="2026-02-17 13:38:53.255427597 +0000 UTC m=+152.534814039" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.326212 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pls7p" podStartSLOduration=13.326192046 podStartE2EDuration="13.326192046s" podCreationTimestamp="2026-02-17 13:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:53.325615448 +0000 UTC m=+152.605001890" watchObservedRunningTime="2026-02-17 13:38:53.326192046 +0000 UTC m=+152.605578488" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.326428 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-catalog-content\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.326546 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-utilities\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.326624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skbx\" (UniqueName: \"kubernetes.io/projected/3529a765-a06e-42c3-9a16-959ca7662469-kube-api-access-9skbx\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.327380 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-catalog-content\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.329989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-utilities\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.368124 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skbx\" (UniqueName: 
\"kubernetes.io/projected/3529a765-a06e-42c3-9a16-959ca7662469-kube-api-access-9skbx\") pod \"redhat-operators-fg2l6\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.416393 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 13:38:53 crc kubenswrapper[4768]: W0217 13:38:53.426381 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod97d5e4e8_9db2_41ea_8516_3869d11a78ea.slice/crio-a12b95cad68c52bcbc8df45178ecfbee6f435b079350ca9419e3ab06686bdbc0 WatchSource:0}: Error finding container a12b95cad68c52bcbc8df45178ecfbee6f435b079350ca9419e3ab06686bdbc0: Status 404 returned error can't find the container with id a12b95cad68c52bcbc8df45178ecfbee6f435b079350ca9419e3ab06686bdbc0 Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.515784 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.546792 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.547453 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s777d"] Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.548119 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:53 crc kubenswrapper[4768]: E0217 13:38:53.548377 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0face492-83c1-49d4-bc1e-7de407151988" containerName="collect-profiles" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.548404 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0face492-83c1-49d4-bc1e-7de407151988" containerName="collect-profiles" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.548538 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0face492-83c1-49d4-bc1e-7de407151988" containerName="collect-profiles" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.548906 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s777d"] Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.548978 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.734539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0face492-83c1-49d4-bc1e-7de407151988-secret-volume\") pod \"0face492-83c1-49d4-bc1e-7de407151988\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.734580 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0face492-83c1-49d4-bc1e-7de407151988-config-volume\") pod \"0face492-83c1-49d4-bc1e-7de407151988\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.734611 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwbct\" (UniqueName: 
\"kubernetes.io/projected/0face492-83c1-49d4-bc1e-7de407151988-kube-api-access-zwbct\") pod \"0face492-83c1-49d4-bc1e-7de407151988\" (UID: \"0face492-83c1-49d4-bc1e-7de407151988\") " Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.735658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq67p\" (UniqueName: \"kubernetes.io/projected/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-kube-api-access-gq67p\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.735864 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-utilities\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.735975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-catalog-content\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.736019 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0face492-83c1-49d4-bc1e-7de407151988-config-volume" (OuterVolumeSpecName: "config-volume") pod "0face492-83c1-49d4-bc1e-7de407151988" (UID: "0face492-83c1-49d4-bc1e-7de407151988"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.736178 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0face492-83c1-49d4-bc1e-7de407151988-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.740088 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0face492-83c1-49d4-bc1e-7de407151988-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0face492-83c1-49d4-bc1e-7de407151988" (UID: "0face492-83c1-49d4-bc1e-7de407151988"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.740533 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0face492-83c1-49d4-bc1e-7de407151988-kube-api-access-zwbct" (OuterVolumeSpecName: "kube-api-access-zwbct") pod "0face492-83c1-49d4-bc1e-7de407151988" (UID: "0face492-83c1-49d4-bc1e-7de407151988"). InnerVolumeSpecName "kube-api-access-zwbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.777973 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:53 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:53 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:53 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.778255 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.837219 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-utilities\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.837294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-catalog-content\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.837362 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq67p\" (UniqueName: \"kubernetes.io/projected/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-kube-api-access-gq67p\") pod \"redhat-operators-s777d\" (UID: 
\"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.837410 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0face492-83c1-49d4-bc1e-7de407151988-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.837428 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwbct\" (UniqueName: \"kubernetes.io/projected/0face492-83c1-49d4-bc1e-7de407151988-kube-api-access-zwbct\") on node \"crc\" DevicePath \"\"" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.838173 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-catalog-content\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.838227 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-utilities\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.863986 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq67p\" (UniqueName: \"kubernetes.io/projected/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-kube-api-access-gq67p\") pod \"redhat-operators-s777d\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:53 crc kubenswrapper[4768]: I0217 13:38:53.869843 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.007543 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fg2l6"] Feb 17 13:38:54 crc kubenswrapper[4768]: W0217 13:38:54.057357 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3529a765_a06e_42c3_9a16_959ca7662469.slice/crio-a89dda73380302fda39de9b4c0ce4be9cc46329aed64bcbc91e1c222bfee5caf WatchSource:0}: Error finding container a89dda73380302fda39de9b4c0ce4be9cc46329aed64bcbc91e1c222bfee5caf: Status 404 returned error can't find the container with id a89dda73380302fda39de9b4c0ce4be9cc46329aed64bcbc91e1c222bfee5caf Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.075389 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.232370 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s777d"] Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.233069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg2l6" event={"ID":"3529a765-a06e-42c3-9a16-959ca7662469","Type":"ContainerStarted","Data":"a89dda73380302fda39de9b4c0ce4be9cc46329aed64bcbc91e1c222bfee5caf"} Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.234896 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" event={"ID":"9cf79399-929e-43c8-9ceb-06619ef1edee","Type":"ContainerStarted","Data":"729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da"} Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.234939 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" 
event={"ID":"9cf79399-929e-43c8-9ceb-06619ef1edee","Type":"ContainerStarted","Data":"8b3dda1eb812d3349d4195832ad6fa1ec1992b55af18e2e5c42653dc2d063b6a"} Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.234988 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.238478 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"97d5e4e8-9db2-41ea-8516-3869d11a78ea","Type":"ContainerStarted","Data":"70f7a5aeccdb4a44ce456787bb9dc861a67a5d0f14f274b7c60d06022f8cc496"} Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.238512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"97d5e4e8-9db2-41ea-8516-3869d11a78ea","Type":"ContainerStarted","Data":"a12b95cad68c52bcbc8df45178ecfbee6f435b079350ca9419e3ab06686bdbc0"} Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.240632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" event={"ID":"0face492-83c1-49d4-bc1e-7de407151988","Type":"ContainerDied","Data":"48da0ef58925f14b1014f92b1ed89ac8055c2064e235f3a6a574fd06ae1f262c"} Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.240660 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48da0ef58925f14b1014f92b1ed89ac8055c2064e235f3a6a574fd06ae1f262c" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.240714 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.257221 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" podStartSLOduration=133.257198355 podStartE2EDuration="2m13.257198355s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:54.255711438 +0000 UTC m=+153.535097880" watchObservedRunningTime="2026-02-17 13:38:54.257198355 +0000 UTC m=+153.536584797" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.270722 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.2707047989999998 podStartE2EDuration="2.270704799s" podCreationTimestamp="2026-02-17 13:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:38:54.266762234 +0000 UTC m=+153.546148676" watchObservedRunningTime="2026-02-17 13:38:54.270704799 +0000 UTC m=+153.550091241" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.278441 4768 generic.go:334] "Generic (PLEG): container finished" podID="6379baf0-ecbe-40bd-bf11-a49e64a264d3" containerID="15c928500f59901ce72e0797b675673a187bee8dbf45b8a47d4de6672df06afa" exitCode=0 Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.278536 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6379baf0-ecbe-40bd-bf11-a49e64a264d3","Type":"ContainerDied","Data":"15c928500f59901ce72e0797b675673a187bee8dbf45b8a47d4de6672df06afa"} Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.431731 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ghmbf" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.541021 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.714998 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.715190 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.720949 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.775734 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.778800 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:54 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:54 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:54 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:54 crc kubenswrapper[4768]: I0217 13:38:54.778855 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.281155 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console-operator/console-operator-58897d9998-8c6lh" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.310550 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s777d" event={"ID":"1c56706d-59bb-4e83-a1d8-a39c61f50cc2","Type":"ContainerStarted","Data":"bfa071440500237200d2db7c87e97d775c5f114b85c300627c60c31edf15193c"} Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.326121 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hxzgb" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.692207 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.772112 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kube-api-access\") pod \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.772185 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kubelet-dir\") pod \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\" (UID: \"6379baf0-ecbe-40bd-bf11-a49e64a264d3\") " Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.772471 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6379baf0-ecbe-40bd-bf11-a49e64a264d3" (UID: "6379baf0-ecbe-40bd-bf11-a49e64a264d3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.778526 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6379baf0-ecbe-40bd-bf11-a49e64a264d3" (UID: "6379baf0-ecbe-40bd-bf11-a49e64a264d3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.784007 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:55 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:55 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:55 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.784055 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.873707 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:38:55 crc kubenswrapper[4768]: I0217 13:38:55.873740 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6379baf0-ecbe-40bd-bf11-a49e64a264d3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.320780 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6379baf0-ecbe-40bd-bf11-a49e64a264d3","Type":"ContainerDied","Data":"c7eb4bcb4142322450e900194ba16d7374fa7368dd0fcf4bfbf6e40acbed2ebd"} Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.320832 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7eb4bcb4142322450e900194ba16d7374fa7368dd0fcf4bfbf6e40acbed2ebd" Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.320794 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.331432 4768 generic.go:334] "Generic (PLEG): container finished" podID="3529a765-a06e-42c3-9a16-959ca7662469" containerID="544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e" exitCode=0 Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.331494 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg2l6" event={"ID":"3529a765-a06e-42c3-9a16-959ca7662469","Type":"ContainerDied","Data":"544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e"} Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.333967 4768 generic.go:334] "Generic (PLEG): container finished" podID="97d5e4e8-9db2-41ea-8516-3869d11a78ea" containerID="70f7a5aeccdb4a44ce456787bb9dc861a67a5d0f14f274b7c60d06022f8cc496" exitCode=0 Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.334023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"97d5e4e8-9db2-41ea-8516-3869d11a78ea","Type":"ContainerDied","Data":"70f7a5aeccdb4a44ce456787bb9dc861a67a5d0f14f274b7c60d06022f8cc496"} Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.339835 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" 
containerID="e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd" exitCode=0 Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.339913 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s777d" event={"ID":"1c56706d-59bb-4e83-a1d8-a39c61f50cc2","Type":"ContainerDied","Data":"e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd"} Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.601178 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sqgwf" Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.776956 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:56 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:56 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:56 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:56 crc kubenswrapper[4768]: I0217 13:38:56.777037 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:57 crc kubenswrapper[4768]: I0217 13:38:57.776925 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:57 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:57 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:57 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:57 crc kubenswrapper[4768]: I0217 13:38:57.776978 4768 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:58 crc kubenswrapper[4768]: I0217 13:38:58.059735 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:38:58 crc kubenswrapper[4768]: I0217 13:38:58.059792 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:38:58 crc kubenswrapper[4768]: I0217 13:38:58.790574 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:58 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:58 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:58 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:58 crc kubenswrapper[4768]: I0217 13:38:58.790855 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:38:59 crc kubenswrapper[4768]: I0217 13:38:59.776892 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:38:59 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:38:59 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:38:59 crc kubenswrapper[4768]: healthz check failed Feb 17 13:38:59 crc kubenswrapper[4768]: I0217 13:38:59.776951 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:39:00 crc kubenswrapper[4768]: I0217 13:39:00.366817 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-rjn87_28c95f1d-fd75-4161-92ca-4cc1e928a1ba/cluster-samples-operator/0.log" Feb 17 13:39:00 crc kubenswrapper[4768]: I0217 13:39:00.367150 4768 generic.go:334] "Generic (PLEG): container finished" podID="28c95f1d-fd75-4161-92ca-4cc1e928a1ba" containerID="31fbcf3e938cb38f482dd713eb622c307d3210f7a43eb501f6e0ee8e9662ec43" exitCode=2 Feb 17 13:39:00 crc kubenswrapper[4768]: I0217 13:39:00.367184 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" event={"ID":"28c95f1d-fd75-4161-92ca-4cc1e928a1ba","Type":"ContainerDied","Data":"31fbcf3e938cb38f482dd713eb622c307d3210f7a43eb501f6e0ee8e9662ec43"} Feb 17 13:39:00 crc kubenswrapper[4768]: I0217 13:39:00.367785 4768 scope.go:117] "RemoveContainer" containerID="31fbcf3e938cb38f482dd713eb622c307d3210f7a43eb501f6e0ee8e9662ec43" Feb 17 13:39:00 crc kubenswrapper[4768]: I0217 13:39:00.777175 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 17 13:39:00 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:39:00 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:39:00 crc kubenswrapper[4768]: healthz check failed Feb 17 13:39:00 crc kubenswrapper[4768]: I0217 13:39:00.777309 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:39:01 crc kubenswrapper[4768]: I0217 13:39:01.777675 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:39:01 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:39:01 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:39:01 crc kubenswrapper[4768]: healthz check failed Feb 17 13:39:01 crc kubenswrapper[4768]: I0217 13:39:01.777731 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:39:02 crc kubenswrapper[4768]: I0217 13:39:02.777274 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:39:02 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 17 13:39:02 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:39:02 crc kubenswrapper[4768]: healthz check failed Feb 17 13:39:02 crc kubenswrapper[4768]: I0217 13:39:02.777535 4768 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:39:02 crc kubenswrapper[4768]: I0217 13:39:02.786860 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mvb69" Feb 17 13:39:02 crc kubenswrapper[4768]: I0217 13:39:02.971221 4768 patch_prober.go:28] interesting pod/console-f9d7485db-9fmzj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 17 13:39:02 crc kubenswrapper[4768]: I0217 13:39:02.971275 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9fmzj" podUID="0030a046-d1bb-4a34-830c-c275306cee43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 17 13:39:03 crc kubenswrapper[4768]: I0217 13:39:03.406216 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:39:03 crc kubenswrapper[4768]: I0217 13:39:03.416776 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c8b1469-ed55-4743-9553-f81efd79e5f1-metrics-certs\") pod \"network-metrics-daemon-5bxh7\" (UID: \"8c8b1469-ed55-4743-9553-f81efd79e5f1\") " pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:39:03 crc kubenswrapper[4768]: I0217 13:39:03.651396 4768 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5bxh7" Feb 17 13:39:03 crc kubenswrapper[4768]: I0217 13:39:03.776739 4768 patch_prober.go:28] interesting pod/router-default-5444994796-ql5b5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 13:39:03 crc kubenswrapper[4768]: [+]has-synced ok Feb 17 13:39:03 crc kubenswrapper[4768]: [+]process-running ok Feb 17 13:39:03 crc kubenswrapper[4768]: healthz check failed Feb 17 13:39:03 crc kubenswrapper[4768]: I0217 13:39:03.776791 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ql5b5" podUID="49862fb8-6a93-48ac-926a-846f72a67989" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 13:39:04 crc kubenswrapper[4768]: I0217 13:39:04.783192 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:39:04 crc kubenswrapper[4768]: I0217 13:39:04.786910 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ql5b5" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.225318 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.349241 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kubelet-dir\") pod \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.349676 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kube-api-access\") pod \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\" (UID: \"97d5e4e8-9db2-41ea-8516-3869d11a78ea\") " Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.350784 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "97d5e4e8-9db2-41ea-8516-3869d11a78ea" (UID: "97d5e4e8-9db2-41ea-8516-3869d11a78ea"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.354212 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "97d5e4e8-9db2-41ea-8516-3869d11a78ea" (UID: "97d5e4e8-9db2-41ea-8516-3869d11a78ea"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.395112 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"97d5e4e8-9db2-41ea-8516-3869d11a78ea","Type":"ContainerDied","Data":"a12b95cad68c52bcbc8df45178ecfbee6f435b079350ca9419e3ab06686bdbc0"} Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.395147 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12b95cad68c52bcbc8df45178ecfbee6f435b079350ca9419e3ab06686bdbc0" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.395195 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.451502 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.451537 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97d5e4e8-9db2-41ea-8516-3869d11a78ea-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:05 crc kubenswrapper[4768]: I0217 13:39:05.677752 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5bxh7"] Feb 17 13:39:06 crc kubenswrapper[4768]: I0217 13:39:06.407094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" event={"ID":"8c8b1469-ed55-4743-9553-f81efd79e5f1","Type":"ContainerStarted","Data":"f97def35c0fcbf895233f6b4894e4f41b77627f0b1e4c9342a6f4451d351bd31"} Feb 17 13:39:06 crc kubenswrapper[4768]: I0217 13:39:06.413557 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-rjn87_28c95f1d-fd75-4161-92ca-4cc1e928a1ba/cluster-samples-operator/0.log" Feb 17 13:39:06 crc kubenswrapper[4768]: I0217 13:39:06.413642 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rjn87" event={"ID":"28c95f1d-fd75-4161-92ca-4cc1e928a1ba","Type":"ContainerStarted","Data":"ae31fafaeb8696472e479d9e501d8ee3cc5bbce4b3935b2860a82bd1565c360a"} Feb 17 13:39:10 crc kubenswrapper[4768]: I0217 13:39:10.345562 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bsrtm"] Feb 17 13:39:10 crc kubenswrapper[4768]: I0217 13:39:10.346355 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" containerID="cri-o://bd662b0fcf80ae8592adf3fc20a8ef23c5f24a6752356787a93772e3687f6125" gracePeriod=30 Feb 17 13:39:10 crc kubenswrapper[4768]: I0217 13:39:10.349643 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg"] Feb 17 13:39:10 crc kubenswrapper[4768]: I0217 13:39:10.349919 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" podUID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" containerName="route-controller-manager" containerID="cri-o://7b69291183765f34d566784290a299965003c1870273482c5c4cdce1e9600f77" gracePeriod=30 Feb 17 13:39:10 crc kubenswrapper[4768]: I0217 13:39:10.453173 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" 
event={"ID":"8c8b1469-ed55-4743-9553-f81efd79e5f1","Type":"ContainerStarted","Data":"b12be6bb5c83b1deceb0f1b2ab9ed2b256ec9e8bf79dac2702181bbd9fe34715"} Feb 17 13:39:12 crc kubenswrapper[4768]: I0217 13:39:12.905388 4768 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c55bg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 17 13:39:12 crc kubenswrapper[4768]: I0217 13:39:12.905463 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" podUID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 17 13:39:12 crc kubenswrapper[4768]: I0217 13:39:12.993874 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:39:13 crc kubenswrapper[4768]: I0217 13:39:13.000749 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:39:13 crc kubenswrapper[4768]: I0217 13:39:13.005172 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:39:15 crc kubenswrapper[4768]: I0217 13:39:15.061445 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bsrtm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 13:39:15 crc kubenswrapper[4768]: I0217 13:39:15.061791 4768 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 13:39:18 crc kubenswrapper[4768]: E0217 13:39:18.222154 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1146210373/2\": happened during read: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 13:39:18 crc kubenswrapper[4768]: E0217 13:39:18.222621 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4wvbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4tmgb_openshift-marketplace(b7de1a69-e892-4ca4-a61f-20a221ce38ba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1146210373/2\": happened during read: context canceled" logger="UnhandledError" Feb 17 13:39:18 crc kubenswrapper[4768]: E0217 13:39:18.225025 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage1146210373/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/certified-operators-4tmgb" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" Feb 17 13:39:18 crc kubenswrapper[4768]: I0217 13:39:18.511375 4768 generic.go:334] "Generic (PLEG): container finished" podID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerID="bd662b0fcf80ae8592adf3fc20a8ef23c5f24a6752356787a93772e3687f6125" exitCode=0 Feb 17 13:39:18 crc kubenswrapper[4768]: I0217 13:39:18.511459 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" event={"ID":"0af47ec2-b35c-48df-8f91-9c878fb5ee94","Type":"ContainerDied","Data":"bd662b0fcf80ae8592adf3fc20a8ef23c5f24a6752356787a93772e3687f6125"} Feb 17 13:39:18 crc kubenswrapper[4768]: I0217 13:39:18.513407 4768 generic.go:334] "Generic (PLEG): container finished" podID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" containerID="7b69291183765f34d566784290a299965003c1870273482c5c4cdce1e9600f77" exitCode=0 Feb 17 13:39:18 crc kubenswrapper[4768]: I0217 13:39:18.513488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" event={"ID":"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773","Type":"ContainerDied","Data":"7b69291183765f34d566784290a299965003c1870273482c5c4cdce1e9600f77"} Feb 17 13:39:23 crc kubenswrapper[4768]: I0217 13:39:23.906358 4768 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c55bg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 13:39:23 crc kubenswrapper[4768]: I0217 13:39:23.906813 4768 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" podUID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.062087 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bsrtm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.062231 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.450862 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vgjr8" Feb 17 13:39:24 crc kubenswrapper[4768]: E0217 13:39:24.631072 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4tmgb" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.668300 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.703772 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k"] Feb 17 13:39:24 crc kubenswrapper[4768]: E0217 13:39:24.704040 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d5e4e8-9db2-41ea-8516-3869d11a78ea" containerName="pruner" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.704056 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d5e4e8-9db2-41ea-8516-3869d11a78ea" containerName="pruner" Feb 17 13:39:24 crc kubenswrapper[4768]: E0217 13:39:24.704069 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6379baf0-ecbe-40bd-bf11-a49e64a264d3" containerName="pruner" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.704076 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6379baf0-ecbe-40bd-bf11-a49e64a264d3" containerName="pruner" Feb 17 13:39:24 crc kubenswrapper[4768]: E0217 13:39:24.704092 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" containerName="route-controller-manager" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.704119 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" containerName="route-controller-manager" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.704235 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" containerName="route-controller-manager" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.704253 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6379baf0-ecbe-40bd-bf11-a49e64a264d3" containerName="pruner" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.704263 4768 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="97d5e4e8-9db2-41ea-8516-3869d11a78ea" containerName="pruner" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.704737 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.712214 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k"] Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811328 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-client-ca\") pod \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811437 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5nr7\" (UniqueName: \"kubernetes.io/projected/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-kube-api-access-d5nr7\") pod \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811503 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-serving-cert\") pod \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811557 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-config\") pod \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\" (UID: \"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773\") " Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811697 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jnn\" (UniqueName: \"kubernetes.io/projected/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-kube-api-access-67jnn\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811760 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-serving-cert\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811839 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-config\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.811860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-client-ca\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.812238 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" (UID: "97d3e088-9eff-4b50-a5ac-c5bd6bfcb773"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.812524 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-config" (OuterVolumeSpecName: "config") pod "97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" (UID: "97d3e088-9eff-4b50-a5ac-c5bd6bfcb773"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.824396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-kube-api-access-d5nr7" (OuterVolumeSpecName: "kube-api-access-d5nr7") pod "97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" (UID: "97d3e088-9eff-4b50-a5ac-c5bd6bfcb773"). InnerVolumeSpecName "kube-api-access-d5nr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.825569 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" (UID: "97d3e088-9eff-4b50-a5ac-c5bd6bfcb773"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913491 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-serving-cert\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913573 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-config\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-client-ca\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913662 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67jnn\" (UniqueName: \"kubernetes.io/projected/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-kube-api-access-67jnn\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913722 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913733 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913744 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5nr7\" (UniqueName: \"kubernetes.io/projected/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-kube-api-access-d5nr7\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.913754 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.915638 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-client-ca\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.916025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-config\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.928847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67jnn\" (UniqueName: 
\"kubernetes.io/projected/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-kube-api-access-67jnn\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: I0217 13:39:24.930364 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-serving-cert\") pod \"route-controller-manager-56bfc9748-k8g8k\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:24 crc kubenswrapper[4768]: E0217 13:39:24.964382 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 13:39:24 crc kubenswrapper[4768]: E0217 13:39:24.964595 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9skbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fg2l6_openshift-marketplace(3529a765-a06e-42c3-9a16-959ca7662469): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 13:39:24 crc kubenswrapper[4768]: E0217 13:39:24.965891 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fg2l6" podUID="3529a765-a06e-42c3-9a16-959ca7662469" Feb 17 13:39:25 crc 
kubenswrapper[4768]: I0217 13:39:25.024400 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:25 crc kubenswrapper[4768]: I0217 13:39:25.555449 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" event={"ID":"97d3e088-9eff-4b50-a5ac-c5bd6bfcb773","Type":"ContainerDied","Data":"ad40713bc17388a9ffd14843fcb8a014b014908d81db7a14249450bbe09501b3"} Feb 17 13:39:25 crc kubenswrapper[4768]: I0217 13:39:25.555464 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg" Feb 17 13:39:25 crc kubenswrapper[4768]: I0217 13:39:25.555524 4768 scope.go:117] "RemoveContainer" containerID="7b69291183765f34d566784290a299965003c1870273482c5c4cdce1e9600f77" Feb 17 13:39:25 crc kubenswrapper[4768]: I0217 13:39:25.607181 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg"] Feb 17 13:39:25 crc kubenswrapper[4768]: I0217 13:39:25.610794 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c55bg"] Feb 17 13:39:26 crc kubenswrapper[4768]: E0217 13:39:26.160916 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fg2l6" podUID="3529a765-a06e-42c3-9a16-959ca7662469" Feb 17 13:39:26 crc kubenswrapper[4768]: E0217 13:39:26.224045 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 13:39:26 crc kubenswrapper[4768]: E0217 13:39:26.224238 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh9d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fbx28_openshift-marketplace(e826302e-7052-4a6e-a626-93b7b433096a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 
13:39:26 crc kubenswrapper[4768]: E0217 13:39:26.226369 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fbx28" podUID="e826302e-7052-4a6e-a626-93b7b433096a" Feb 17 13:39:26 crc kubenswrapper[4768]: E0217 13:39:26.271125 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 13:39:26 crc kubenswrapper[4768]: E0217 13:39:26.271308 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnlnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ncxf5_openshift-marketplace(9497730e-2a05-40b9-a4ee-364b67a9133c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 13:39:26 crc kubenswrapper[4768]: E0217 13:39:26.272833 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ncxf5" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" Feb 17 13:39:27 crc 
kubenswrapper[4768]: I0217 13:39:27.540462 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d3e088-9eff-4b50-a5ac-c5bd6bfcb773" path="/var/lib/kubelet/pods/97d3e088-9eff-4b50-a5ac-c5bd6bfcb773/volumes" Feb 17 13:39:27 crc kubenswrapper[4768]: E0217 13:39:27.618401 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fbx28" podUID="e826302e-7052-4a6e-a626-93b7b433096a" Feb 17 13:39:27 crc kubenswrapper[4768]: E0217 13:39:27.618483 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ncxf5" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" Feb 17 13:39:27 crc kubenswrapper[4768]: E0217 13:39:27.777166 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 13:39:27 crc kubenswrapper[4768]: E0217 13:39:27.777353 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n4dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pc222_openshift-marketplace(a409f38d-1da9-42e5-94ff-502133f6cee2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 13:39:27 crc kubenswrapper[4768]: E0217 13:39:27.778517 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pc222" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" Feb 17 13:39:28 crc 
kubenswrapper[4768]: I0217 13:39:28.059781 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:39:28 crc kubenswrapper[4768]: I0217 13:39:28.060093 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.167905 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pc222" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.183415 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.183565 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkcmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wsm7m_openshift-marketplace(7e7136ca-949e-49ff-9f79-47e485a039cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.184787 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wsm7m" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" Feb 17 13:39:29 crc 
kubenswrapper[4768]: E0217 13:39:29.238185 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.238641 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hvg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9r46k_openshift-marketplace(6a25c47f-7f1c-42ed-85bf-acfe8949338b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.239863 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9r46k" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.249007 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.282737 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cb9c7f474-44f84"] Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.282956 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.282966 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.283071 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" containerName="controller-manager" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.283523 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.287440 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cb9c7f474-44f84"] Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.367448 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-client-ca\") pod \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.367836 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-proxy-ca-bundles\") pod \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.367890 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-config\") pod \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.367927 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cccpk\" (UniqueName: \"kubernetes.io/projected/0af47ec2-b35c-48df-8f91-9c878fb5ee94-kube-api-access-cccpk\") pod \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\" (UID: \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.367955 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af47ec2-b35c-48df-8f91-9c878fb5ee94-serving-cert\") pod \"0af47ec2-b35c-48df-8f91-9c878fb5ee94\" (UID: 
\"0af47ec2-b35c-48df-8f91-9c878fb5ee94\") " Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.368388 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-client-ca" (OuterVolumeSpecName: "client-ca") pod "0af47ec2-b35c-48df-8f91-9c878fb5ee94" (UID: "0af47ec2-b35c-48df-8f91-9c878fb5ee94"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.368734 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0af47ec2-b35c-48df-8f91-9c878fb5ee94" (UID: "0af47ec2-b35c-48df-8f91-9c878fb5ee94"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.368828 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-config" (OuterVolumeSpecName: "config") pod "0af47ec2-b35c-48df-8f91-9c878fb5ee94" (UID: "0af47ec2-b35c-48df-8f91-9c878fb5ee94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.376751 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af47ec2-b35c-48df-8f91-9c878fb5ee94-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0af47ec2-b35c-48df-8f91-9c878fb5ee94" (UID: "0af47ec2-b35c-48df-8f91-9c878fb5ee94"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.385588 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af47ec2-b35c-48df-8f91-9c878fb5ee94-kube-api-access-cccpk" (OuterVolumeSpecName: "kube-api-access-cccpk") pod "0af47ec2-b35c-48df-8f91-9c878fb5ee94" (UID: "0af47ec2-b35c-48df-8f91-9c878fb5ee94"). InnerVolumeSpecName "kube-api-access-cccpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469189 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-client-ca\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469437 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtx8\" (UniqueName: \"kubernetes.io/projected/262d064f-a10f-4c7f-b256-59fa234849e8-kube-api-access-9wtx8\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469476 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-config\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469507 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-proxy-ca-bundles\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469554 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/262d064f-a10f-4c7f-b256-59fa234849e8-serving-cert\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469596 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469606 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cccpk\" (UniqueName: \"kubernetes.io/projected/0af47ec2-b35c-48df-8f91-9c878fb5ee94-kube-api-access-cccpk\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469614 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af47ec2-b35c-48df-8f91-9c878fb5ee94-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469622 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.469630 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0af47ec2-b35c-48df-8f91-9c878fb5ee94-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.562418 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.570327 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-client-ca\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.570378 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtx8\" (UniqueName: \"kubernetes.io/projected/262d064f-a10f-4c7f-b256-59fa234849e8-kube-api-access-9wtx8\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.570422 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-config\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.570456 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-proxy-ca-bundles\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 
17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.571814 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/262d064f-a10f-4c7f-b256-59fa234849e8-serving-cert\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.572327 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-client-ca\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.572573 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-proxy-ca-bundles\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.574277 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s777d" event={"ID":"1c56706d-59bb-4e83-a1d8-a39c61f50cc2","Type":"ContainerStarted","Data":"c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283"} Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.577897 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/262d064f-a10f-4c7f-b256-59fa234849e8-serving-cert\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc 
kubenswrapper[4768]: I0217 13:39:29.578153 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" event={"ID":"0af47ec2-b35c-48df-8f91-9c878fb5ee94","Type":"ContainerDied","Data":"aa33fd1f159cb25aee488540f8590926d6e38d5886ecaed15983fe3d5b472941"} Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.578197 4768 scope.go:117] "RemoveContainer" containerID="bd662b0fcf80ae8592adf3fc20a8ef23c5f24a6752356787a93772e3687f6125" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.578323 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bsrtm" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.578636 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-config\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.584499 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5bxh7" event={"ID":"8c8b1469-ed55-4743-9553-f81efd79e5f1","Type":"ContainerStarted","Data":"d624db18eb9695438b9c330e2fe7fd33f20cef9efba89a4ae7229aab4e2de71b"} Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.585088 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9r46k" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" Feb 17 13:39:29 crc kubenswrapper[4768]: E0217 13:39:29.587853 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wsm7m" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.594146 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtx8\" (UniqueName: \"kubernetes.io/projected/262d064f-a10f-4c7f-b256-59fa234849e8-kube-api-access-9wtx8\") pod \"controller-manager-7cb9c7f474-44f84\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.605427 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.618235 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bsrtm"] Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.622149 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bsrtm"] Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.636596 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k"] Feb 17 13:39:29 crc kubenswrapper[4768]: W0217 13:39:29.655132 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cfdf977_ff76_4f4f_8a53_a69143dec6ec.slice/crio-f03fff2b9dac833a4502094ae0d3583631d23b6c1d928a87870b59fd02852259 WatchSource:0}: Error finding container f03fff2b9dac833a4502094ae0d3583631d23b6c1d928a87870b59fd02852259: Status 404 returned error can't find the container with id f03fff2b9dac833a4502094ae0d3583631d23b6c1d928a87870b59fd02852259 Feb 17 
13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.671296 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5bxh7" podStartSLOduration=168.671281302 podStartE2EDuration="2m48.671281302s" podCreationTimestamp="2026-02-17 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:29.670657042 +0000 UTC m=+188.950043484" watchObservedRunningTime="2026-02-17 13:39:29.671281302 +0000 UTC m=+188.950667744" Feb 17 13:39:29 crc kubenswrapper[4768]: I0217 13:39:29.877199 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cb9c7f474-44f84"] Feb 17 13:39:29 crc kubenswrapper[4768]: W0217 13:39:29.884518 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod262d064f_a10f_4c7f_b256_59fa234849e8.slice/crio-58109480b1d1aad304029d868f4c3fe40da95f4b042a9cbb9cc808dd6f9c554d WatchSource:0}: Error finding container 58109480b1d1aad304029d868f4c3fe40da95f4b042a9cbb9cc808dd6f9c554d: Status 404 returned error can't find the container with id 58109480b1d1aad304029d868f4c3fe40da95f4b042a9cbb9cc808dd6f9c554d Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.330903 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cb9c7f474-44f84"] Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.426670 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k"] Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.593160 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" 
event={"ID":"262d064f-a10f-4c7f-b256-59fa234849e8","Type":"ContainerStarted","Data":"e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b"} Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.593225 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" event={"ID":"262d064f-a10f-4c7f-b256-59fa234849e8","Type":"ContainerStarted","Data":"58109480b1d1aad304029d868f4c3fe40da95f4b042a9cbb9cc808dd6f9c554d"} Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.593398 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.594335 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" event={"ID":"2cfdf977-ff76-4f4f-8a53-a69143dec6ec","Type":"ContainerStarted","Data":"f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50"} Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.594384 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" event={"ID":"2cfdf977-ff76-4f4f-8a53-a69143dec6ec","Type":"ContainerStarted","Data":"f03fff2b9dac833a4502094ae0d3583631d23b6c1d928a87870b59fd02852259"} Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.594649 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.596609 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerID="c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283" exitCode=0 Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.597015 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s777d" event={"ID":"1c56706d-59bb-4e83-a1d8-a39c61f50cc2","Type":"ContainerDied","Data":"c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283"} Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.599616 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.601220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.616832 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" podStartSLOduration=20.616811906 podStartE2EDuration="20.616811906s" podCreationTimestamp="2026-02-17 13:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:30.61217772 +0000 UTC m=+189.891564192" watchObservedRunningTime="2026-02-17 13:39:30.616811906 +0000 UTC m=+189.896198348" Feb 17 13:39:30 crc kubenswrapper[4768]: I0217 13:39:30.661974 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" podStartSLOduration=20.661954671 podStartE2EDuration="20.661954671s" podCreationTimestamp="2026-02-17 13:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:30.659200345 +0000 UTC m=+189.938586797" watchObservedRunningTime="2026-02-17 13:39:30.661954671 +0000 UTC m=+189.941341113" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.496934 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] 
Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.500279 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.502311 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.503915 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.508361 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.549878 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af47ec2-b35c-48df-8f91-9c878fb5ee94" path="/var/lib/kubelet/pods/0af47ec2-b35c-48df-8f91-9c878fb5ee94/volumes" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.597815 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72577e68-d50a-435c-ba95-a3c36b742154-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.597860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72577e68-d50a-435c-ba95-a3c36b742154-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.605907 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s777d" 
event={"ID":"1c56706d-59bb-4e83-a1d8-a39c61f50cc2","Type":"ContainerStarted","Data":"8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187"} Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.606049 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" podUID="2cfdf977-ff76-4f4f-8a53-a69143dec6ec" containerName="route-controller-manager" containerID="cri-o://f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50" gracePeriod=30 Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.607875 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" podUID="262d064f-a10f-4c7f-b256-59fa234849e8" containerName="controller-manager" containerID="cri-o://e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b" gracePeriod=30 Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.633840 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s777d" podStartSLOduration=4.013561948 podStartE2EDuration="38.63380237s" podCreationTimestamp="2026-02-17 13:38:53 +0000 UTC" firstStartedPulling="2026-02-17 13:38:56.342976098 +0000 UTC m=+155.622362540" lastFinishedPulling="2026-02-17 13:39:30.96321652 +0000 UTC m=+190.242602962" observedRunningTime="2026-02-17 13:39:31.631749236 +0000 UTC m=+190.911135678" watchObservedRunningTime="2026-02-17 13:39:31.63380237 +0000 UTC m=+190.913188832" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.699742 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72577e68-d50a-435c-ba95-a3c36b742154-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:31 crc 
kubenswrapper[4768]: I0217 13:39:31.699809 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72577e68-d50a-435c-ba95-a3c36b742154-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.699893 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72577e68-d50a-435c-ba95-a3c36b742154-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.722831 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72577e68-d50a-435c-ba95-a3c36b742154-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:31 crc kubenswrapper[4768]: I0217 13:39:31.823269 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.104993 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.113279 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.143888 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-756dc596b5-xvlv5"] Feb 17 13:39:32 crc kubenswrapper[4768]: E0217 13:39:32.144134 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfdf977-ff76-4f4f-8a53-a69143dec6ec" containerName="route-controller-manager" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.144147 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfdf977-ff76-4f4f-8a53-a69143dec6ec" containerName="route-controller-manager" Feb 17 13:39:32 crc kubenswrapper[4768]: E0217 13:39:32.144165 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262d064f-a10f-4c7f-b256-59fa234849e8" containerName="controller-manager" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.144171 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="262d064f-a10f-4c7f-b256-59fa234849e8" containerName="controller-manager" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.144257 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfdf977-ff76-4f4f-8a53-a69143dec6ec" containerName="route-controller-manager" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.144277 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="262d064f-a10f-4c7f-b256-59fa234849e8" containerName="controller-manager" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.144619 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.164502 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-756dc596b5-xvlv5"] Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.204764 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-proxy-ca-bundles\") pod \"262d064f-a10f-4c7f-b256-59fa234849e8\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.204807 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/262d064f-a10f-4c7f-b256-59fa234849e8-serving-cert\") pod \"262d064f-a10f-4c7f-b256-59fa234849e8\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.204835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-client-ca\") pod \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.204883 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67jnn\" (UniqueName: \"kubernetes.io/projected/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-kube-api-access-67jnn\") pod \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.205235 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wtx8\" (UniqueName: \"kubernetes.io/projected/262d064f-a10f-4c7f-b256-59fa234849e8-kube-api-access-9wtx8\") pod 
\"262d064f-a10f-4c7f-b256-59fa234849e8\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.205837 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cfdf977-ff76-4f4f-8a53-a69143dec6ec" (UID: "2cfdf977-ff76-4f4f-8a53-a69143dec6ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.206019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-config\") pod \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.206058 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-config\") pod \"262d064f-a10f-4c7f-b256-59fa234849e8\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.206084 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-serving-cert\") pod \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\" (UID: \"2cfdf977-ff76-4f4f-8a53-a69143dec6ec\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.206148 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-client-ca\") pod \"262d064f-a10f-4c7f-b256-59fa234849e8\" (UID: \"262d064f-a10f-4c7f-b256-59fa234849e8\") " Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.206320 4768 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "262d064f-a10f-4c7f-b256-59fa234849e8" (UID: "262d064f-a10f-4c7f-b256-59fa234849e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.207000 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.207023 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.207166 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-config" (OuterVolumeSpecName: "config") pod "2cfdf977-ff76-4f4f-8a53-a69143dec6ec" (UID: "2cfdf977-ff76-4f4f-8a53-a69143dec6ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.207168 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-config" (OuterVolumeSpecName: "config") pod "262d064f-a10f-4c7f-b256-59fa234849e8" (UID: "262d064f-a10f-4c7f-b256-59fa234849e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.207361 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "262d064f-a10f-4c7f-b256-59fa234849e8" (UID: "262d064f-a10f-4c7f-b256-59fa234849e8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.209909 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/262d064f-a10f-4c7f-b256-59fa234849e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "262d064f-a10f-4c7f-b256-59fa234849e8" (UID: "262d064f-a10f-4c7f-b256-59fa234849e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.209989 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-kube-api-access-67jnn" (OuterVolumeSpecName: "kube-api-access-67jnn") pod "2cfdf977-ff76-4f4f-8a53-a69143dec6ec" (UID: "2cfdf977-ff76-4f4f-8a53-a69143dec6ec"). InnerVolumeSpecName "kube-api-access-67jnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.210048 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262d064f-a10f-4c7f-b256-59fa234849e8-kube-api-access-9wtx8" (OuterVolumeSpecName: "kube-api-access-9wtx8") pod "262d064f-a10f-4c7f-b256-59fa234849e8" (UID: "262d064f-a10f-4c7f-b256-59fa234849e8"). InnerVolumeSpecName "kube-api-access-9wtx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.210556 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cfdf977-ff76-4f4f-8a53-a69143dec6ec" (UID: "2cfdf977-ff76-4f4f-8a53-a69143dec6ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.264375 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308171 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e584c520-463b-4a51-8368-aeea8510245d-serving-cert\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb95t\" (UniqueName: \"kubernetes.io/projected/e584c520-463b-4a51-8368-aeea8510245d-kube-api-access-mb95t\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308384 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-config\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc 
kubenswrapper[4768]: I0217 13:39:32.308422 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-proxy-ca-bundles\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308502 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-client-ca\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308564 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308588 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/262d064f-a10f-4c7f-b256-59fa234849e8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308601 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67jnn\" (UniqueName: \"kubernetes.io/projected/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-kube-api-access-67jnn\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308809 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wtx8\" (UniqueName: \"kubernetes.io/projected/262d064f-a10f-4c7f-b256-59fa234849e8-kube-api-access-9wtx8\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 
13:39:32.308832 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308845 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/262d064f-a10f-4c7f-b256-59fa234849e8-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.308856 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cfdf977-ff76-4f4f-8a53-a69143dec6ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.410485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-client-ca\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.410563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e584c520-463b-4a51-8368-aeea8510245d-serving-cert\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.410606 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb95t\" (UniqueName: \"kubernetes.io/projected/e584c520-463b-4a51-8368-aeea8510245d-kube-api-access-mb95t\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 
crc kubenswrapper[4768]: I0217 13:39:32.410652 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-config\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.410676 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-proxy-ca-bundles\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.411404 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-client-ca\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.412229 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-proxy-ca-bundles\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.416474 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-config\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " 
pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.417796 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e584c520-463b-4a51-8368-aeea8510245d-serving-cert\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.424634 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb95t\" (UniqueName: \"kubernetes.io/projected/e584c520-463b-4a51-8368-aeea8510245d-kube-api-access-mb95t\") pod \"controller-manager-756dc596b5-xvlv5\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.469980 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.614121 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"72577e68-d50a-435c-ba95-a3c36b742154","Type":"ContainerStarted","Data":"9302c372551b07f827b4484f8316715f31b604da73414040c3b119634b70ea0f"} Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.614174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"72577e68-d50a-435c-ba95-a3c36b742154","Type":"ContainerStarted","Data":"6f23590e4def43d61efaaddfb69cd3c8424faddfbd13144516f8f84485e70776"} Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.615392 4768 generic.go:334] "Generic (PLEG): container finished" podID="262d064f-a10f-4c7f-b256-59fa234849e8" containerID="e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b" exitCode=0 Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.615602 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" event={"ID":"262d064f-a10f-4c7f-b256-59fa234849e8","Type":"ContainerDied","Data":"e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b"} Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.615677 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.615699 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb9c7f474-44f84" event={"ID":"262d064f-a10f-4c7f-b256-59fa234849e8","Type":"ContainerDied","Data":"58109480b1d1aad304029d868f4c3fe40da95f4b042a9cbb9cc808dd6f9c554d"} Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.615723 4768 scope.go:117] "RemoveContainer" containerID="e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.618149 4768 generic.go:334] "Generic (PLEG): container finished" podID="2cfdf977-ff76-4f4f-8a53-a69143dec6ec" containerID="f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50" exitCode=0 Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.618810 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.619025 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" event={"ID":"2cfdf977-ff76-4f4f-8a53-a69143dec6ec","Type":"ContainerDied","Data":"f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50"} Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.619053 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k" event={"ID":"2cfdf977-ff76-4f4f-8a53-a69143dec6ec","Type":"ContainerDied","Data":"f03fff2b9dac833a4502094ae0d3583631d23b6c1d928a87870b59fd02852259"} Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.647081 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
podStartSLOduration=1.647060498 podStartE2EDuration="1.647060498s" podCreationTimestamp="2026-02-17 13:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:32.629744584 +0000 UTC m=+191.909131026" watchObservedRunningTime="2026-02-17 13:39:32.647060498 +0000 UTC m=+191.926446940" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.653146 4768 scope.go:117] "RemoveContainer" containerID="e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b" Feb 17 13:39:32 crc kubenswrapper[4768]: E0217 13:39:32.657160 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b\": container with ID starting with e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b not found: ID does not exist" containerID="e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.657322 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b"} err="failed to get container status \"e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b\": rpc error: code = NotFound desc = could not find container \"e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b\": container with ID starting with e83f118f66d95a204afae29a469e0040890d58e7bf8537ed5989bff86e7dae7b not found: ID does not exist" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.657394 4768 scope.go:117] "RemoveContainer" containerID="f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.664262 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cb9c7f474-44f84"] Feb 17 
13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.683469 4768 scope.go:117] "RemoveContainer" containerID="f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50" Feb 17 13:39:32 crc kubenswrapper[4768]: E0217 13:39:32.683971 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50\": container with ID starting with f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50 not found: ID does not exist" containerID="f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.684143 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50"} err="failed to get container status \"f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50\": rpc error: code = NotFound desc = could not find container \"f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50\": container with ID starting with f1dcfd0c1bed2c108b3162bba15fe101da2e4fcb401480902150598ceb4bdd50 not found: ID does not exist" Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.684761 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cb9c7f474-44f84"] Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.691637 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k"] Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.694194 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-k8g8k"] Feb 17 13:39:32 crc kubenswrapper[4768]: I0217 13:39:32.698503 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-756dc596b5-xvlv5"] Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.544343 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="262d064f-a10f-4c7f-b256-59fa234849e8" path="/var/lib/kubelet/pods/262d064f-a10f-4c7f-b256-59fa234849e8/volumes" Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.545565 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfdf977-ff76-4f4f-8a53-a69143dec6ec" path="/var/lib/kubelet/pods/2cfdf977-ff76-4f4f-8a53-a69143dec6ec/volumes" Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.628415 4768 generic.go:334] "Generic (PLEG): container finished" podID="72577e68-d50a-435c-ba95-a3c36b742154" containerID="9302c372551b07f827b4484f8316715f31b604da73414040c3b119634b70ea0f" exitCode=0 Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.628481 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"72577e68-d50a-435c-ba95-a3c36b742154","Type":"ContainerDied","Data":"9302c372551b07f827b4484f8316715f31b604da73414040c3b119634b70ea0f"} Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.633751 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" event={"ID":"e584c520-463b-4a51-8368-aeea8510245d","Type":"ContainerStarted","Data":"924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08"} Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.633783 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" event={"ID":"e584c520-463b-4a51-8368-aeea8510245d","Type":"ContainerStarted","Data":"4e42afd49347ceffd2290e86bc142f05b722ed839f49a5e9152adfc2e4bc9c5c"} Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.634607 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.637978 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.662920 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" podStartSLOduration=3.662651589 podStartE2EDuration="3.662651589s" podCreationTimestamp="2026-02-17 13:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:33.659238942 +0000 UTC m=+192.938625384" watchObservedRunningTime="2026-02-17 13:39:33.662651589 +0000 UTC m=+192.942038041" Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.870156 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:39:33 crc kubenswrapper[4768]: I0217 13:39:33.870237 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:39:34 crc kubenswrapper[4768]: I0217 13:39:34.880865 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.041589 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72577e68-d50a-435c-ba95-a3c36b742154-kubelet-dir\") pod \"72577e68-d50a-435c-ba95-a3c36b742154\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.041737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72577e68-d50a-435c-ba95-a3c36b742154-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72577e68-d50a-435c-ba95-a3c36b742154" (UID: "72577e68-d50a-435c-ba95-a3c36b742154"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.041877 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72577e68-d50a-435c-ba95-a3c36b742154-kube-api-access\") pod \"72577e68-d50a-435c-ba95-a3c36b742154\" (UID: \"72577e68-d50a-435c-ba95-a3c36b742154\") " Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.043523 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72577e68-d50a-435c-ba95-a3c36b742154-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.051870 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72577e68-d50a-435c-ba95-a3c36b742154-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72577e68-d50a-435c-ba95-a3c36b742154" (UID: "72577e68-d50a-435c-ba95-a3c36b742154"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.088608 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s777d" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="registry-server" probeResult="failure" output=< Feb 17 13:39:35 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 13:39:35 crc kubenswrapper[4768]: > Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.105056 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn"] Feb 17 13:39:35 crc kubenswrapper[4768]: E0217 13:39:35.105605 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72577e68-d50a-435c-ba95-a3c36b742154" containerName="pruner" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.105619 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72577e68-d50a-435c-ba95-a3c36b742154" containerName="pruner" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.105717 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="72577e68-d50a-435c-ba95-a3c36b742154" containerName="pruner" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.106085 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.109123 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.109413 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.109508 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.111608 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.111775 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.111803 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.129197 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn"] Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.145010 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72577e68-d50a-435c-ba95-a3c36b742154-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.246489 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-config\") pod 
\"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.246555 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4979b-0975-4ba7-86b1-56ef16779495-serving-cert\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.246624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjrhf\" (UniqueName: \"kubernetes.io/projected/46f4979b-0975-4ba7-86b1-56ef16779495-kube-api-access-xjrhf\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.246648 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-client-ca\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.348075 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjrhf\" (UniqueName: \"kubernetes.io/projected/46f4979b-0975-4ba7-86b1-56ef16779495-kube-api-access-xjrhf\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " 
pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.348128 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-client-ca\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.348185 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-config\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.348213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4979b-0975-4ba7-86b1-56ef16779495-serving-cert\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.349312 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-config\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.353394 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-client-ca\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.357923 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4979b-0975-4ba7-86b1-56ef16779495-serving-cert\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.366801 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjrhf\" (UniqueName: \"kubernetes.io/projected/46f4979b-0975-4ba7-86b1-56ef16779495-kube-api-access-xjrhf\") pod \"route-controller-manager-58c549fbf6-h8dnn\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.424832 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.672153 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"72577e68-d50a-435c-ba95-a3c36b742154","Type":"ContainerDied","Data":"6f23590e4def43d61efaaddfb69cd3c8424faddfbd13144516f8f84485e70776"} Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.672225 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f23590e4def43d61efaaddfb69cd3c8424faddfbd13144516f8f84485e70776" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.672985 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 13:39:35 crc kubenswrapper[4768]: I0217 13:39:35.869699 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn"] Feb 17 13:39:35 crc kubenswrapper[4768]: W0217 13:39:35.877049 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f4979b_0975_4ba7_86b1_56ef16779495.slice/crio-8d1879c2d61526d2a90ec7c28bc5308e7c5b7f0899e50341bf4939cd89a341ee WatchSource:0}: Error finding container 8d1879c2d61526d2a90ec7c28bc5308e7c5b7f0899e50341bf4939cd89a341ee: Status 404 returned error can't find the container with id 8d1879c2d61526d2a90ec7c28bc5308e7c5b7f0899e50341bf4939cd89a341ee Feb 17 13:39:36 crc kubenswrapper[4768]: I0217 13:39:36.679564 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" event={"ID":"46f4979b-0975-4ba7-86b1-56ef16779495","Type":"ContainerStarted","Data":"c534307b94e911788db3da3be38cc36c1359dcda729537f16c514cf54c2fad53"} Feb 17 13:39:36 crc kubenswrapper[4768]: I0217 
13:39:36.679619 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" event={"ID":"46f4979b-0975-4ba7-86b1-56ef16779495","Type":"ContainerStarted","Data":"8d1879c2d61526d2a90ec7c28bc5308e7c5b7f0899e50341bf4939cd89a341ee"} Feb 17 13:39:36 crc kubenswrapper[4768]: I0217 13:39:36.680428 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:36 crc kubenswrapper[4768]: I0217 13:39:36.686492 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:36 crc kubenswrapper[4768]: I0217 13:39:36.696661 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" podStartSLOduration=6.696644524 podStartE2EDuration="6.696644524s" podCreationTimestamp="2026-02-17 13:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:36.695424129 +0000 UTC m=+195.974810571" watchObservedRunningTime="2026-02-17 13:39:36.696644524 +0000 UTC m=+195.976030966" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.689805 4768 generic.go:334] "Generic (PLEG): container finished" podID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerID="2e8e516ccec666d0db47e7e57894b43c5d307b9ebaa45fe6a75acb61c381244c" exitCode=0 Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.690356 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tmgb" event={"ID":"b7de1a69-e892-4ca4-a61f-20a221ce38ba","Type":"ContainerDied","Data":"2e8e516ccec666d0db47e7e57894b43c5d307b9ebaa45fe6a75acb61c381244c"} Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.694809 4768 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.695468 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.697017 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.699703 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.716278 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.892992 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81f5c792-d672-4014-84b4-c7b05fbb1139-kube-api-access\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.893135 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-var-lock\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.893230 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-kubelet-dir\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 
13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.994021 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81f5c792-d672-4014-84b4-c7b05fbb1139-kube-api-access\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.994062 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-var-lock\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.994147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-kubelet-dir\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.994210 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-kubelet-dir\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:37 crc kubenswrapper[4768]: I0217 13:39:37.994239 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-var-lock\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:38 crc kubenswrapper[4768]: I0217 13:39:38.011065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/81f5c792-d672-4014-84b4-c7b05fbb1139-kube-api-access\") pod \"installer-9-crc\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:38 crc kubenswrapper[4768]: I0217 13:39:38.072396 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:39:38 crc kubenswrapper[4768]: I0217 13:39:38.474914 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 13:39:38 crc kubenswrapper[4768]: W0217 13:39:38.490733 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod81f5c792_d672_4014_84b4_c7b05fbb1139.slice/crio-64202ad4659f8938866d6c0d4534b4c9d815740fa76dc37600bfe19ea425aac3 WatchSource:0}: Error finding container 64202ad4659f8938866d6c0d4534b4c9d815740fa76dc37600bfe19ea425aac3: Status 404 returned error can't find the container with id 64202ad4659f8938866d6c0d4534b4c9d815740fa76dc37600bfe19ea425aac3 Feb 17 13:39:38 crc kubenswrapper[4768]: I0217 13:39:38.705083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"81f5c792-d672-4014-84b4-c7b05fbb1139","Type":"ContainerStarted","Data":"64202ad4659f8938866d6c0d4534b4c9d815740fa76dc37600bfe19ea425aac3"} Feb 17 13:39:38 crc kubenswrapper[4768]: I0217 13:39:38.707496 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tmgb" event={"ID":"b7de1a69-e892-4ca4-a61f-20a221ce38ba","Type":"ContainerStarted","Data":"71032de5b6c96e576123376dc42a0ecc62eee333dd0b5158c94d5c2bd0296832"} Feb 17 13:39:38 crc kubenswrapper[4768]: I0217 13:39:38.709006 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg2l6" 
event={"ID":"3529a765-a06e-42c3-9a16-959ca7662469","Type":"ContainerStarted","Data":"7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc"} Feb 17 13:39:38 crc kubenswrapper[4768]: I0217 13:39:38.726981 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tmgb" podStartSLOduration=3.685891061 podStartE2EDuration="48.726963163s" podCreationTimestamp="2026-02-17 13:38:50 +0000 UTC" firstStartedPulling="2026-02-17 13:38:53.167636704 +0000 UTC m=+152.447023156" lastFinishedPulling="2026-02-17 13:39:38.208708816 +0000 UTC m=+197.488095258" observedRunningTime="2026-02-17 13:39:38.724520021 +0000 UTC m=+198.003906463" watchObservedRunningTime="2026-02-17 13:39:38.726963163 +0000 UTC m=+198.006349605" Feb 17 13:39:39 crc kubenswrapper[4768]: I0217 13:39:39.714711 4768 generic.go:334] "Generic (PLEG): container finished" podID="3529a765-a06e-42c3-9a16-959ca7662469" containerID="7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc" exitCode=0 Feb 17 13:39:39 crc kubenswrapper[4768]: I0217 13:39:39.714786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg2l6" event={"ID":"3529a765-a06e-42c3-9a16-959ca7662469","Type":"ContainerDied","Data":"7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc"} Feb 17 13:39:39 crc kubenswrapper[4768]: I0217 13:39:39.716299 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"81f5c792-d672-4014-84b4-c7b05fbb1139","Type":"ContainerStarted","Data":"e5f4cec90449264cb08f44fc8d7c61f6443966fa498283ba332f6f62f8e55442"} Feb 17 13:39:40 crc kubenswrapper[4768]: I0217 13:39:40.552887 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.552871015 podStartE2EDuration="3.552871015s" podCreationTimestamp="2026-02-17 13:39:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:39.752334501 +0000 UTC m=+199.031720943" watchObservedRunningTime="2026-02-17 13:39:40.552871015 +0000 UTC m=+199.832257457" Feb 17 13:39:40 crc kubenswrapper[4768]: I0217 13:39:40.723297 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg2l6" event={"ID":"3529a765-a06e-42c3-9a16-959ca7662469","Type":"ContainerStarted","Data":"77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05"} Feb 17 13:39:40 crc kubenswrapper[4768]: I0217 13:39:40.737869 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fg2l6" podStartSLOduration=3.887602262 podStartE2EDuration="47.737849392s" podCreationTimestamp="2026-02-17 13:38:53 +0000 UTC" firstStartedPulling="2026-02-17 13:38:56.335245566 +0000 UTC m=+155.614632008" lastFinishedPulling="2026-02-17 13:39:40.185492696 +0000 UTC m=+199.464879138" observedRunningTime="2026-02-17 13:39:40.736507312 +0000 UTC m=+200.015893754" watchObservedRunningTime="2026-02-17 13:39:40.737849392 +0000 UTC m=+200.017235844" Feb 17 13:39:40 crc kubenswrapper[4768]: I0217 13:39:40.802390 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:39:40 crc kubenswrapper[4768]: I0217 13:39:40.802720 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:39:40 crc kubenswrapper[4768]: I0217 13:39:40.869605 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:39:43 crc kubenswrapper[4768]: I0217 13:39:43.517230 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:39:43 crc 
kubenswrapper[4768]: I0217 13:39:43.517725 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:39:43 crc kubenswrapper[4768]: I0217 13:39:43.907293 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:39:43 crc kubenswrapper[4768]: I0217 13:39:43.952170 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.585228 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fg2l6" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="registry-server" probeResult="failure" output=< Feb 17 13:39:44 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 13:39:44 crc kubenswrapper[4768]: > Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.750344 4768 generic.go:334] "Generic (PLEG): container finished" podID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerID="4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7" exitCode=0 Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.750426 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc222" event={"ID":"a409f38d-1da9-42e5-94ff-502133f6cee2","Type":"ContainerDied","Data":"4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7"} Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.761917 4768 generic.go:334] "Generic (PLEG): container finished" podID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerID="73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634" exitCode=0 Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.762003 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsm7m" 
event={"ID":"7e7136ca-949e-49ff-9f79-47e485a039cb","Type":"ContainerDied","Data":"73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634"} Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.765416 4768 generic.go:334] "Generic (PLEG): container finished" podID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerID="9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4" exitCode=0 Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.765492 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncxf5" event={"ID":"9497730e-2a05-40b9-a4ee-364b67a9133c","Type":"ContainerDied","Data":"9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4"} Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.769383 4768 generic.go:334] "Generic (PLEG): container finished" podID="e826302e-7052-4a6e-a626-93b7b433096a" containerID="0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06" exitCode=0 Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.769462 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbx28" event={"ID":"e826302e-7052-4a6e-a626-93b7b433096a","Type":"ContainerDied","Data":"0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06"} Feb 17 13:39:44 crc kubenswrapper[4768]: I0217 13:39:44.772895 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9r46k" event={"ID":"6a25c47f-7f1c-42ed-85bf-acfe8949338b","Type":"ContainerStarted","Data":"1d0eabb20024b09a164aa7eb0f24b373aa36e888cc131dfa1ebafaaadde57c60"} Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.780089 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsm7m" event={"ID":"7e7136ca-949e-49ff-9f79-47e485a039cb","Type":"ContainerStarted","Data":"9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025"} Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 
13:39:45.785394 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncxf5" event={"ID":"9497730e-2a05-40b9-a4ee-364b67a9133c","Type":"ContainerStarted","Data":"dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f"} Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.788392 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbx28" event={"ID":"e826302e-7052-4a6e-a626-93b7b433096a","Type":"ContainerStarted","Data":"338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa"} Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.790983 4768 generic.go:334] "Generic (PLEG): container finished" podID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerID="1d0eabb20024b09a164aa7eb0f24b373aa36e888cc131dfa1ebafaaadde57c60" exitCode=0 Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.791079 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9r46k" event={"ID":"6a25c47f-7f1c-42ed-85bf-acfe8949338b","Type":"ContainerDied","Data":"1d0eabb20024b09a164aa7eb0f24b373aa36e888cc131dfa1ebafaaadde57c60"} Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.791130 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9r46k" event={"ID":"6a25c47f-7f1c-42ed-85bf-acfe8949338b","Type":"ContainerStarted","Data":"03d02b603bb5139c3ef49f573d035c9467063bdbf4f03b019b691b38c3cdbd6a"} Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.793656 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc222" event={"ID":"a409f38d-1da9-42e5-94ff-502133f6cee2","Type":"ContainerStarted","Data":"38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5"} Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.810807 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-wsm7m" podStartSLOduration=3.791496734 podStartE2EDuration="55.810784714s" podCreationTimestamp="2026-02-17 13:38:50 +0000 UTC" firstStartedPulling="2026-02-17 13:38:53.201374092 +0000 UTC m=+152.480760534" lastFinishedPulling="2026-02-17 13:39:45.220662072 +0000 UTC m=+204.500048514" observedRunningTime="2026-02-17 13:39:45.807294661 +0000 UTC m=+205.086681103" watchObservedRunningTime="2026-02-17 13:39:45.810784714 +0000 UTC m=+205.090171156" Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.839037 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ncxf5" podStartSLOduration=2.878867075 podStartE2EDuration="54.83901943s" podCreationTimestamp="2026-02-17 13:38:51 +0000 UTC" firstStartedPulling="2026-02-17 13:38:53.207390781 +0000 UTC m=+152.486777223" lastFinishedPulling="2026-02-17 13:39:45.167543136 +0000 UTC m=+204.446929578" observedRunningTime="2026-02-17 13:39:45.835008362 +0000 UTC m=+205.114394814" watchObservedRunningTime="2026-02-17 13:39:45.83901943 +0000 UTC m=+205.118405872" Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.856529 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pc222" podStartSLOduration=3.793109972 podStartE2EDuration="56.856508662s" podCreationTimestamp="2026-02-17 13:38:49 +0000 UTC" firstStartedPulling="2026-02-17 13:38:52.125590774 +0000 UTC m=+151.404977216" lastFinishedPulling="2026-02-17 13:39:45.188989464 +0000 UTC m=+204.468375906" observedRunningTime="2026-02-17 13:39:45.856341988 +0000 UTC m=+205.135728440" watchObservedRunningTime="2026-02-17 13:39:45.856508662 +0000 UTC m=+205.135895104" Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.873392 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbx28" podStartSLOduration=1.834723259 
podStartE2EDuration="53.873374156s" podCreationTimestamp="2026-02-17 13:38:52 +0000 UTC" firstStartedPulling="2026-02-17 13:38:53.172504277 +0000 UTC m=+152.451890719" lastFinishedPulling="2026-02-17 13:39:45.211155174 +0000 UTC m=+204.490541616" observedRunningTime="2026-02-17 13:39:45.872996975 +0000 UTC m=+205.152383417" watchObservedRunningTime="2026-02-17 13:39:45.873374156 +0000 UTC m=+205.152760598" Feb 17 13:39:45 crc kubenswrapper[4768]: I0217 13:39:45.894577 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9r46k" podStartSLOduration=3.774037775 podStartE2EDuration="55.894558107s" podCreationTimestamp="2026-02-17 13:38:50 +0000 UTC" firstStartedPulling="2026-02-17 13:38:53.192799883 +0000 UTC m=+152.472186325" lastFinishedPulling="2026-02-17 13:39:45.313320205 +0000 UTC m=+204.592706657" observedRunningTime="2026-02-17 13:39:45.894437753 +0000 UTC m=+205.173824195" watchObservedRunningTime="2026-02-17 13:39:45.894558107 +0000 UTC m=+205.173944559" Feb 17 13:39:46 crc kubenswrapper[4768]: I0217 13:39:46.974068 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s777d"] Feb 17 13:39:46 crc kubenswrapper[4768]: I0217 13:39:46.974306 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s777d" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="registry-server" containerID="cri-o://8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187" gracePeriod=2 Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.471676 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.536979 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-catalog-content\") pod \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.537087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq67p\" (UniqueName: \"kubernetes.io/projected/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-kube-api-access-gq67p\") pod \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.537129 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-utilities\") pod \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\" (UID: \"1c56706d-59bb-4e83-a1d8-a39c61f50cc2\") " Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.537811 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-utilities" (OuterVolumeSpecName: "utilities") pod "1c56706d-59bb-4e83-a1d8-a39c61f50cc2" (UID: "1c56706d-59bb-4e83-a1d8-a39c61f50cc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.546117 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-kube-api-access-gq67p" (OuterVolumeSpecName: "kube-api-access-gq67p") pod "1c56706d-59bb-4e83-a1d8-a39c61f50cc2" (UID: "1c56706d-59bb-4e83-a1d8-a39c61f50cc2"). InnerVolumeSpecName "kube-api-access-gq67p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.638866 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq67p\" (UniqueName: \"kubernetes.io/projected/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-kube-api-access-gq67p\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.638912 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.672757 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c56706d-59bb-4e83-a1d8-a39c61f50cc2" (UID: "1c56706d-59bb-4e83-a1d8-a39c61f50cc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.740389 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c56706d-59bb-4e83-a1d8-a39c61f50cc2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.805665 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerID="8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187" exitCode=0 Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.805692 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s777d" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.805705 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s777d" event={"ID":"1c56706d-59bb-4e83-a1d8-a39c61f50cc2","Type":"ContainerDied","Data":"8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187"} Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.805733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s777d" event={"ID":"1c56706d-59bb-4e83-a1d8-a39c61f50cc2","Type":"ContainerDied","Data":"bfa071440500237200d2db7c87e97d775c5f114b85c300627c60c31edf15193c"} Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.805750 4768 scope.go:117] "RemoveContainer" containerID="8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.823382 4768 scope.go:117] "RemoveContainer" containerID="c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.844761 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s777d"] Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.845387 4768 scope.go:117] "RemoveContainer" containerID="e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.846603 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s777d"] Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.863395 4768 scope.go:117] "RemoveContainer" containerID="8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187" Feb 17 13:39:47 crc kubenswrapper[4768]: E0217 13:39:47.863769 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187\": container with ID starting with 8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187 not found: ID does not exist" containerID="8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.863800 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187"} err="failed to get container status \"8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187\": rpc error: code = NotFound desc = could not find container \"8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187\": container with ID starting with 8d4a7aa1483123b2d17e6d3a8b1e0e63045a8478d2ac1c3c8d455394b2542187 not found: ID does not exist" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.863826 4768 scope.go:117] "RemoveContainer" containerID="c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283" Feb 17 13:39:47 crc kubenswrapper[4768]: E0217 13:39:47.866786 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283\": container with ID starting with c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283 not found: ID does not exist" containerID="c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.866849 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283"} err="failed to get container status \"c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283\": rpc error: code = NotFound desc = could not find container \"c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283\": container with ID 
starting with c363b98c16390bd7e98fd1623f76661d5b5588dea54247fd9d6323848a02a283 not found: ID does not exist" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.866870 4768 scope.go:117] "RemoveContainer" containerID="e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd" Feb 17 13:39:47 crc kubenswrapper[4768]: E0217 13:39:47.867205 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd\": container with ID starting with e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd not found: ID does not exist" containerID="e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd" Feb 17 13:39:47 crc kubenswrapper[4768]: I0217 13:39:47.867232 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd"} err="failed to get container status \"e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd\": rpc error: code = NotFound desc = could not find container \"e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd\": container with ID starting with e71010aba13536ec436d60e28042aefb0c36e9e2433efeaba76c1ff8886e68bd not found: ID does not exist" Feb 17 13:39:49 crc kubenswrapper[4768]: I0217 13:39:49.546802 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" path="/var/lib/kubelet/pods/1c56706d-59bb-4e83-a1d8-a39c61f50cc2/volumes" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.087935 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.088294 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:39:50 crc 
kubenswrapper[4768]: I0217 13:39:50.145973 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.268397 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756dc596b5-xvlv5"] Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.268595 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" podUID="e584c520-463b-4a51-8368-aeea8510245d" containerName="controller-manager" containerID="cri-o://924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08" gracePeriod=30 Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.291673 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn"] Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.291869 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" podUID="46f4979b-0975-4ba7-86b1-56ef16779495" containerName="route-controller-manager" containerID="cri-o://c534307b94e911788db3da3be38cc36c1359dcda729537f16c514cf54c2fad53" gracePeriod=30 Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.728378 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.728437 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.770922 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.823334 
4768 generic.go:334] "Generic (PLEG): container finished" podID="46f4979b-0975-4ba7-86b1-56ef16779495" containerID="c534307b94e911788db3da3be38cc36c1359dcda729537f16c514cf54c2fad53" exitCode=0 Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.823447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" event={"ID":"46f4979b-0975-4ba7-86b1-56ef16779495","Type":"ContainerDied","Data":"c534307b94e911788db3da3be38cc36c1359dcda729537f16c514cf54c2fad53"} Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.844709 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.864530 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.874507 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.933334 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.933380 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:39:50 crc kubenswrapper[4768]: I0217 13:39:50.972941 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.517521 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.546948 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765"] Feb 17 13:39:51 crc kubenswrapper[4768]: E0217 13:39:51.547291 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f4979b-0975-4ba7-86b1-56ef16779495" containerName="route-controller-manager" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.547310 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f4979b-0975-4ba7-86b1-56ef16779495" containerName="route-controller-manager" Feb 17 13:39:51 crc kubenswrapper[4768]: E0217 13:39:51.547324 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="extract-content" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.547336 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="extract-content" Feb 17 13:39:51 crc kubenswrapper[4768]: E0217 13:39:51.547357 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="registry-server" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.547368 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="registry-server" Feb 17 13:39:51 crc kubenswrapper[4768]: E0217 13:39:51.547387 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="extract-utilities" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.547400 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="extract-utilities" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.547584 4768 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46f4979b-0975-4ba7-86b1-56ef16779495" containerName="route-controller-manager" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.547601 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c56706d-59bb-4e83-a1d8-a39c61f50cc2" containerName="registry-server" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.548181 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.551780 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765"] Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584211 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-config\") pod \"46f4979b-0975-4ba7-86b1-56ef16779495\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584282 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-client-ca\") pod \"46f4979b-0975-4ba7-86b1-56ef16779495\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584354 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4979b-0975-4ba7-86b1-56ef16779495-serving-cert\") pod \"46f4979b-0975-4ba7-86b1-56ef16779495\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584381 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjrhf\" (UniqueName: 
\"kubernetes.io/projected/46f4979b-0975-4ba7-86b1-56ef16779495-kube-api-access-xjrhf\") pod \"46f4979b-0975-4ba7-86b1-56ef16779495\" (UID: \"46f4979b-0975-4ba7-86b1-56ef16779495\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584521 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbj95\" (UniqueName: \"kubernetes.io/projected/14124682-b6b8-4c21-a24f-f3368478a1d3-kube-api-access-dbj95\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584566 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-client-ca\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584613 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14124682-b6b8-4c21-a24f-f3368478a1d3-serving-cert\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.584631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-config\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 
crc kubenswrapper[4768]: I0217 13:39:51.585154 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-client-ca" (OuterVolumeSpecName: "client-ca") pod "46f4979b-0975-4ba7-86b1-56ef16779495" (UID: "46f4979b-0975-4ba7-86b1-56ef16779495"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.585661 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-config" (OuterVolumeSpecName: "config") pod "46f4979b-0975-4ba7-86b1-56ef16779495" (UID: "46f4979b-0975-4ba7-86b1-56ef16779495"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.590785 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f4979b-0975-4ba7-86b1-56ef16779495-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46f4979b-0975-4ba7-86b1-56ef16779495" (UID: "46f4979b-0975-4ba7-86b1-56ef16779495"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.595610 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f4979b-0975-4ba7-86b1-56ef16779495-kube-api-access-xjrhf" (OuterVolumeSpecName: "kube-api-access-xjrhf") pod "46f4979b-0975-4ba7-86b1-56ef16779495" (UID: "46f4979b-0975-4ba7-86b1-56ef16779495"). InnerVolumeSpecName "kube-api-access-xjrhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.636296 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.684903 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e584c520-463b-4a51-8368-aeea8510245d-serving-cert\") pod \"e584c520-463b-4a51-8368-aeea8510245d\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.684942 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-proxy-ca-bundles\") pod \"e584c520-463b-4a51-8368-aeea8510245d\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.684960 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb95t\" (UniqueName: \"kubernetes.io/projected/e584c520-463b-4a51-8368-aeea8510245d-kube-api-access-mb95t\") pod \"e584c520-463b-4a51-8368-aeea8510245d\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685041 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-client-ca\") pod \"e584c520-463b-4a51-8368-aeea8510245d\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685080 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-config\") pod \"e584c520-463b-4a51-8368-aeea8510245d\" (UID: \"e584c520-463b-4a51-8368-aeea8510245d\") " Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dbj95\" (UniqueName: \"kubernetes.io/projected/14124682-b6b8-4c21-a24f-f3368478a1d3-kube-api-access-dbj95\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685481 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-client-ca\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14124682-b6b8-4c21-a24f-f3368478a1d3-serving-cert\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685551 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-config\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685592 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4979b-0975-4ba7-86b1-56ef16779495-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685604 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjrhf\" 
(UniqueName: \"kubernetes.io/projected/46f4979b-0975-4ba7-86b1-56ef16779495-kube-api-access-xjrhf\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685613 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.685621 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4979b-0975-4ba7-86b1-56ef16779495-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.687250 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-config\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.688796 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-config" (OuterVolumeSpecName: "config") pod "e584c520-463b-4a51-8368-aeea8510245d" (UID: "e584c520-463b-4a51-8368-aeea8510245d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.689333 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e584c520-463b-4a51-8368-aeea8510245d" (UID: "e584c520-463b-4a51-8368-aeea8510245d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.689350 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-client-ca" (OuterVolumeSpecName: "client-ca") pod "e584c520-463b-4a51-8368-aeea8510245d" (UID: "e584c520-463b-4a51-8368-aeea8510245d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.691089 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e584c520-463b-4a51-8368-aeea8510245d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e584c520-463b-4a51-8368-aeea8510245d" (UID: "e584c520-463b-4a51-8368-aeea8510245d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.692232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-client-ca\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.692929 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e584c520-463b-4a51-8368-aeea8510245d-kube-api-access-mb95t" (OuterVolumeSpecName: "kube-api-access-mb95t") pod "e584c520-463b-4a51-8368-aeea8510245d" (UID: "e584c520-463b-4a51-8368-aeea8510245d"). InnerVolumeSpecName "kube-api-access-mb95t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.695475 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14124682-b6b8-4c21-a24f-f3368478a1d3-serving-cert\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.711741 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbj95\" (UniqueName: \"kubernetes.io/projected/14124682-b6b8-4c21-a24f-f3368478a1d3-kube-api-access-dbj95\") pod \"route-controller-manager-6c9c747f-mj765\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.786161 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e584c520-463b-4a51-8368-aeea8510245d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.786228 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.786244 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb95t\" (UniqueName: \"kubernetes.io/projected/e584c520-463b-4a51-8368-aeea8510245d-kube-api-access-mb95t\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.786256 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 
13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.786268 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e584c520-463b-4a51-8368-aeea8510245d-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.832763 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.833272 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn" event={"ID":"46f4979b-0975-4ba7-86b1-56ef16779495","Type":"ContainerDied","Data":"8d1879c2d61526d2a90ec7c28bc5308e7c5b7f0899e50341bf4939cd89a341ee"} Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.833349 4768 scope.go:117] "RemoveContainer" containerID="c534307b94e911788db3da3be38cc36c1359dcda729537f16c514cf54c2fad53" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.837889 4768 generic.go:334] "Generic (PLEG): container finished" podID="e584c520-463b-4a51-8368-aeea8510245d" containerID="924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08" exitCode=0 Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.837942 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.838042 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" event={"ID":"e584c520-463b-4a51-8368-aeea8510245d","Type":"ContainerDied","Data":"924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08"} Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.838153 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756dc596b5-xvlv5" event={"ID":"e584c520-463b-4a51-8368-aeea8510245d","Type":"ContainerDied","Data":"4e42afd49347ceffd2290e86bc142f05b722ed839f49a5e9152adfc2e4bc9c5c"} Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.855875 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn"] Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.858712 4768 scope.go:117] "RemoveContainer" containerID="924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.858869 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c549fbf6-h8dnn"] Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.867141 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.876546 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756dc596b5-xvlv5"] Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.879774 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-756dc596b5-xvlv5"] Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.881562 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.882812 4768 scope.go:117] "RemoveContainer" containerID="924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08" Feb 17 13:39:51 crc kubenswrapper[4768]: E0217 13:39:51.883399 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08\": container with ID starting with 924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08 not found: ID does not exist" containerID="924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08" Feb 17 13:39:51 crc kubenswrapper[4768]: I0217 13:39:51.883482 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08"} err="failed to get container status \"924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08\": rpc error: code = NotFound desc = could not find container \"924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08\": container with ID starting with 924fa2955598bb3b610ba4f57564a6d425355e5a4084abdbff9ca2d542bfca08 not found: ID does not exist" Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.117435 4768 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h77q6"] Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.167889 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765"] Feb 17 13:39:52 crc kubenswrapper[4768]: W0217 13:39:52.179185 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14124682_b6b8_4c21_a24f_f3368478a1d3.slice/crio-6f37f934269f4fcf35e709d41550a0f94f14435336a81eaab9dc7824dabc536c WatchSource:0}: Error finding container 6f37f934269f4fcf35e709d41550a0f94f14435336a81eaab9dc7824dabc536c: Status 404 returned error can't find the container with id 6f37f934269f4fcf35e709d41550a0f94f14435336a81eaab9dc7824dabc536c Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.379290 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.379603 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.418390 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.661869 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.661928 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.703735 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:39:52 crc 
kubenswrapper[4768]: I0217 13:39:52.779024 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tmgb"] Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.779289 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4tmgb" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="registry-server" containerID="cri-o://71032de5b6c96e576123376dc42a0ecc62eee333dd0b5158c94d5c2bd0296832" gracePeriod=2 Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.843389 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" event={"ID":"14124682-b6b8-4c21-a24f-f3368478a1d3","Type":"ContainerStarted","Data":"a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c"} Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.843445 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" event={"ID":"14124682-b6b8-4c21-a24f-f3368478a1d3","Type":"ContainerStarted","Data":"6f37f934269f4fcf35e709d41550a0f94f14435336a81eaab9dc7824dabc536c"} Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.889439 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:39:52 crc kubenswrapper[4768]: I0217 13:39:52.899741 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.387493 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9r46k"] Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.539850 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f4979b-0975-4ba7-86b1-56ef16779495" 
path="/var/lib/kubelet/pods/46f4979b-0975-4ba7-86b1-56ef16779495/volumes" Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.540719 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e584c520-463b-4a51-8368-aeea8510245d" path="/var/lib/kubelet/pods/e584c520-463b-4a51-8368-aeea8510245d/volumes" Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.553209 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.589814 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.854387 4768 generic.go:334] "Generic (PLEG): container finished" podID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerID="71032de5b6c96e576123376dc42a0ecc62eee333dd0b5158c94d5c2bd0296832" exitCode=0 Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.854520 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tmgb" event={"ID":"b7de1a69-e892-4ca4-a61f-20a221ce38ba","Type":"ContainerDied","Data":"71032de5b6c96e576123376dc42a0ecc62eee333dd0b5158c94d5c2bd0296832"} Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.854618 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9r46k" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="registry-server" containerID="cri-o://03d02b603bb5139c3ef49f573d035c9467063bdbf4f03b019b691b38c3cdbd6a" gracePeriod=2 Feb 17 13:39:53 crc kubenswrapper[4768]: I0217 13:39:53.880963 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" podStartSLOduration=3.880918918 podStartE2EDuration="3.880918918s" podCreationTimestamp="2026-02-17 13:39:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:53.876774597 +0000 UTC m=+213.156161039" watchObservedRunningTime="2026-02-17 13:39:53.880918918 +0000 UTC m=+213.160305370" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.123561 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c67cb67f8-fprzn"] Feb 17 13:39:54 crc kubenswrapper[4768]: E0217 13:39:54.123806 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e584c520-463b-4a51-8368-aeea8510245d" containerName="controller-manager" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.123821 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e584c520-463b-4a51-8368-aeea8510245d" containerName="controller-manager" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.123944 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e584c520-463b-4a51-8368-aeea8510245d" containerName="controller-manager" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.124396 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.134685 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.134701 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.134812 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.134828 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.134870 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.134939 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.142606 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.143340 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c67cb67f8-fprzn"] Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.316454 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-client-ca\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " 
pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.316518 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-proxy-ca-bundles\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.316550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4790bc2-efc0-4c08-ba24-285ca999d0c0-serving-cert\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.316587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6b4\" (UniqueName: \"kubernetes.io/projected/e4790bc2-efc0-4c08-ba24-285ca999d0c0-kube-api-access-nj6b4\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.316615 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-config\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.365664 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.417806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-client-ca\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.417866 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-proxy-ca-bundles\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.417896 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4790bc2-efc0-4c08-ba24-285ca999d0c0-serving-cert\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.417930 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6b4\" (UniqueName: \"kubernetes.io/projected/e4790bc2-efc0-4c08-ba24-285ca999d0c0-kube-api-access-nj6b4\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.417952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-config\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.418886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-client-ca\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.418952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-proxy-ca-bundles\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.420611 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-config\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.426933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4790bc2-efc0-4c08-ba24-285ca999d0c0-serving-cert\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.431977 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nj6b4\" (UniqueName: \"kubernetes.io/projected/e4790bc2-efc0-4c08-ba24-285ca999d0c0-kube-api-access-nj6b4\") pod \"controller-manager-c67cb67f8-fprzn\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.453356 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.518500 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wvbd\" (UniqueName: \"kubernetes.io/projected/b7de1a69-e892-4ca4-a61f-20a221ce38ba-kube-api-access-4wvbd\") pod \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.518549 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-utilities\") pod \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.518606 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-catalog-content\") pod \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\" (UID: \"b7de1a69-e892-4ca4-a61f-20a221ce38ba\") " Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.520504 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-utilities" (OuterVolumeSpecName: "utilities") pod "b7de1a69-e892-4ca4-a61f-20a221ce38ba" (UID: "b7de1a69-e892-4ca4-a61f-20a221ce38ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.521197 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7de1a69-e892-4ca4-a61f-20a221ce38ba-kube-api-access-4wvbd" (OuterVolumeSpecName: "kube-api-access-4wvbd") pod "b7de1a69-e892-4ca4-a61f-20a221ce38ba" (UID: "b7de1a69-e892-4ca4-a61f-20a221ce38ba"). InnerVolumeSpecName "kube-api-access-4wvbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.595999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7de1a69-e892-4ca4-a61f-20a221ce38ba" (UID: "b7de1a69-e892-4ca4-a61f-20a221ce38ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.619833 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.619916 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wvbd\" (UniqueName: \"kubernetes.io/projected/b7de1a69-e892-4ca4-a61f-20a221ce38ba-kube-api-access-4wvbd\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.619933 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7de1a69-e892-4ca4-a61f-20a221ce38ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.844964 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c67cb67f8-fprzn"] Feb 17 13:39:54 crc 
kubenswrapper[4768]: W0217 13:39:54.850135 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4790bc2_efc0_4c08_ba24_285ca999d0c0.slice/crio-a7bd91e62ebb5648cc7d43316020da5502f4a0dfb3676cf5ee4f32ceb330fa28 WatchSource:0}: Error finding container a7bd91e62ebb5648cc7d43316020da5502f4a0dfb3676cf5ee4f32ceb330fa28: Status 404 returned error can't find the container with id a7bd91e62ebb5648cc7d43316020da5502f4a0dfb3676cf5ee4f32ceb330fa28 Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.860897 4768 generic.go:334] "Generic (PLEG): container finished" podID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerID="03d02b603bb5139c3ef49f573d035c9467063bdbf4f03b019b691b38c3cdbd6a" exitCode=0 Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.860968 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9r46k" event={"ID":"6a25c47f-7f1c-42ed-85bf-acfe8949338b","Type":"ContainerDied","Data":"03d02b603bb5139c3ef49f573d035c9467063bdbf4f03b019b691b38c3cdbd6a"} Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.862237 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" event={"ID":"e4790bc2-efc0-4c08-ba24-285ca999d0c0","Type":"ContainerStarted","Data":"a7bd91e62ebb5648cc7d43316020da5502f4a0dfb3676cf5ee4f32ceb330fa28"} Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.864086 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tmgb" event={"ID":"b7de1a69-e892-4ca4-a61f-20a221ce38ba","Type":"ContainerDied","Data":"93c5a7bb556cb317acdd1e795a04c07ecd06768393a332b995cb23e5e67db69d"} Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.864130 4768 scope.go:117] "RemoveContainer" containerID="71032de5b6c96e576123376dc42a0ecc62eee333dd0b5158c94d5c2bd0296832" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.864202 
4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tmgb" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.882751 4768 scope.go:117] "RemoveContainer" containerID="2e8e516ccec666d0db47e7e57894b43c5d307b9ebaa45fe6a75acb61c381244c" Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.899122 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tmgb"] Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.901945 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4tmgb"] Feb 17 13:39:54 crc kubenswrapper[4768]: I0217 13:39:54.914232 4768 scope.go:117] "RemoveContainer" containerID="8c1edb7bbf59f6c069d4628f9c53364b649843a0212906e1d1b5db40b37b695c" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.176296 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbx28"] Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.176571 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fbx28" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="registry-server" containerID="cri-o://338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa" gracePeriod=2 Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.473620 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.540590 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" path="/var/lib/kubelet/pods/b7de1a69-e892-4ca4-a61f-20a221ce38ba/volumes" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.553330 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.639355 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-utilities\") pod \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.639436 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hvg2\" (UniqueName: \"kubernetes.io/projected/6a25c47f-7f1c-42ed-85bf-acfe8949338b-kube-api-access-7hvg2\") pod \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.639488 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-catalog-content\") pod \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\" (UID: \"6a25c47f-7f1c-42ed-85bf-acfe8949338b\") " Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.641194 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-utilities" (OuterVolumeSpecName: "utilities") pod "6a25c47f-7f1c-42ed-85bf-acfe8949338b" (UID: "6a25c47f-7f1c-42ed-85bf-acfe8949338b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.648557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a25c47f-7f1c-42ed-85bf-acfe8949338b-kube-api-access-7hvg2" (OuterVolumeSpecName: "kube-api-access-7hvg2") pod "6a25c47f-7f1c-42ed-85bf-acfe8949338b" (UID: "6a25c47f-7f1c-42ed-85bf-acfe8949338b"). InnerVolumeSpecName "kube-api-access-7hvg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.706351 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a25c47f-7f1c-42ed-85bf-acfe8949338b" (UID: "6a25c47f-7f1c-42ed-85bf-acfe8949338b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.740986 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-utilities\") pod \"e826302e-7052-4a6e-a626-93b7b433096a\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.741128 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-catalog-content\") pod \"e826302e-7052-4a6e-a626-93b7b433096a\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.741673 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-utilities" (OuterVolumeSpecName: "utilities") pod "e826302e-7052-4a6e-a626-93b7b433096a" (UID: "e826302e-7052-4a6e-a626-93b7b433096a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.747233 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh9d2\" (UniqueName: \"kubernetes.io/projected/e826302e-7052-4a6e-a626-93b7b433096a-kube-api-access-sh9d2\") pod \"e826302e-7052-4a6e-a626-93b7b433096a\" (UID: \"e826302e-7052-4a6e-a626-93b7b433096a\") " Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.747483 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.747501 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hvg2\" (UniqueName: \"kubernetes.io/projected/6a25c47f-7f1c-42ed-85bf-acfe8949338b-kube-api-access-7hvg2\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.747512 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.747522 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a25c47f-7f1c-42ed-85bf-acfe8949338b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.751634 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e826302e-7052-4a6e-a626-93b7b433096a-kube-api-access-sh9d2" (OuterVolumeSpecName: "kube-api-access-sh9d2") pod "e826302e-7052-4a6e-a626-93b7b433096a" (UID: "e826302e-7052-4a6e-a626-93b7b433096a"). InnerVolumeSpecName "kube-api-access-sh9d2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.765028 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e826302e-7052-4a6e-a626-93b7b433096a" (UID: "e826302e-7052-4a6e-a626-93b7b433096a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.848119 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh9d2\" (UniqueName: \"kubernetes.io/projected/e826302e-7052-4a6e-a626-93b7b433096a-kube-api-access-sh9d2\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.848146 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e826302e-7052-4a6e-a626-93b7b433096a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.872231 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9r46k" event={"ID":"6a25c47f-7f1c-42ed-85bf-acfe8949338b","Type":"ContainerDied","Data":"d80785c03391662c9f34ed3e05d403bbf46f8c6b15a459593193e6e372efa0d2"} Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.872268 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9r46k" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.872339 4768 scope.go:117] "RemoveContainer" containerID="03d02b603bb5139c3ef49f573d035c9467063bdbf4f03b019b691b38c3cdbd6a" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.873755 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" event={"ID":"e4790bc2-efc0-4c08-ba24-285ca999d0c0","Type":"ContainerStarted","Data":"7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07"} Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.874018 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.876666 4768 generic.go:334] "Generic (PLEG): container finished" podID="e826302e-7052-4a6e-a626-93b7b433096a" containerID="338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa" exitCode=0 Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.876712 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbx28" event={"ID":"e826302e-7052-4a6e-a626-93b7b433096a","Type":"ContainerDied","Data":"338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa"} Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.876730 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbx28" event={"ID":"e826302e-7052-4a6e-a626-93b7b433096a","Type":"ContainerDied","Data":"8e94a8d68f3d4f0f21c79d09a09891ba67811ff6ebd97fe4a6cc7806f1915e53"} Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.876779 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbx28" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.879057 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.888629 4768 scope.go:117] "RemoveContainer" containerID="1d0eabb20024b09a164aa7eb0f24b373aa36e888cc131dfa1ebafaaadde57c60" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.906890 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" podStartSLOduration=5.906873399 podStartE2EDuration="5.906873399s" podCreationTimestamp="2026-02-17 13:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:39:55.905693654 +0000 UTC m=+215.185080106" watchObservedRunningTime="2026-02-17 13:39:55.906873399 +0000 UTC m=+215.186259861" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.925975 4768 scope.go:117] "RemoveContainer" containerID="31b098a6831bc783c28ca6a8b854b3d0d0d61267e51480482a65db81ec03bea7" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.925976 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9r46k"] Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.942524 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9r46k"] Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.944300 4768 scope.go:117] "RemoveContainer" containerID="338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.952576 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbx28"] Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.956615 
4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbx28"] Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.958833 4768 scope.go:117] "RemoveContainer" containerID="0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.973327 4768 scope.go:117] "RemoveContainer" containerID="62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.988128 4768 scope.go:117] "RemoveContainer" containerID="338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa" Feb 17 13:39:55 crc kubenswrapper[4768]: E0217 13:39:55.988512 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa\": container with ID starting with 338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa not found: ID does not exist" containerID="338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.988562 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa"} err="failed to get container status \"338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa\": rpc error: code = NotFound desc = could not find container \"338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa\": container with ID starting with 338245eb2569d04c0e8f6ab4a8e4e89319d363ca04bb08c5fb71337c4fa68ffa not found: ID does not exist" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.988584 4768 scope.go:117] "RemoveContainer" containerID="0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06" Feb 17 13:39:55 crc kubenswrapper[4768]: E0217 13:39:55.988943 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06\": container with ID starting with 0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06 not found: ID does not exist" containerID="0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.988989 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06"} err="failed to get container status \"0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06\": rpc error: code = NotFound desc = could not find container \"0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06\": container with ID starting with 0bd11d1036a60c7713d23152ceb0b98b9cec201933764b0d444dcc751c2f9f06 not found: ID does not exist" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.989021 4768 scope.go:117] "RemoveContainer" containerID="62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df" Feb 17 13:39:55 crc kubenswrapper[4768]: E0217 13:39:55.989307 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df\": container with ID starting with 62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df not found: ID does not exist" containerID="62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df" Feb 17 13:39:55 crc kubenswrapper[4768]: I0217 13:39:55.989329 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df"} err="failed to get container status \"62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df\": rpc error: code = NotFound desc = could not find container 
\"62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df\": container with ID starting with 62cfd214a35ffcdcc96a8852a2c79c7be4c47a8220438d6ce5e177817f2f84df not found: ID does not exist" Feb 17 13:39:57 crc kubenswrapper[4768]: I0217 13:39:57.544349 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" path="/var/lib/kubelet/pods/6a25c47f-7f1c-42ed-85bf-acfe8949338b/volumes" Feb 17 13:39:57 crc kubenswrapper[4768]: I0217 13:39:57.545754 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e826302e-7052-4a6e-a626-93b7b433096a" path="/var/lib/kubelet/pods/e826302e-7052-4a6e-a626-93b7b433096a/volumes" Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.060560 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.060647 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.060700 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.061310 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.061377 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd" gracePeriod=600 Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.913428 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd" exitCode=0 Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.913533 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd"} Feb 17 13:39:58 crc kubenswrapper[4768]: I0217 13:39:58.913976 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"5027bbc0d5015c18de153045ec7b4f54fa804d4c644f283923fd2686e923444b"} Feb 17 13:40:01 crc kubenswrapper[4768]: I0217 13:40:01.868074 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:40:01 crc kubenswrapper[4768]: I0217 13:40:01.874208 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:40:10 crc kubenswrapper[4768]: I0217 13:40:10.290917 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-c67cb67f8-fprzn"] Feb 17 13:40:10 crc kubenswrapper[4768]: I0217 13:40:10.292561 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" podUID="e4790bc2-efc0-4c08-ba24-285ca999d0c0" containerName="controller-manager" containerID="cri-o://7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07" gracePeriod=30 Feb 17 13:40:10 crc kubenswrapper[4768]: I0217 13:40:10.376619 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765"] Feb 17 13:40:10 crc kubenswrapper[4768]: I0217 13:40:10.376871 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" podUID="14124682-b6b8-4c21-a24f-f3368478a1d3" containerName="route-controller-manager" containerID="cri-o://a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c" gracePeriod=30 Feb 17 13:40:10 crc kubenswrapper[4768]: I0217 13:40:10.868905 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:40:10 crc kubenswrapper[4768]: I0217 13:40:10.911556 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:10.997906 4768 generic.go:334] "Generic (PLEG): container finished" podID="e4790bc2-efc0-4c08-ba24-285ca999d0c0" containerID="7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07" exitCode=0 Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:10.997993 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:10.998019 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" event={"ID":"e4790bc2-efc0-4c08-ba24-285ca999d0c0","Type":"ContainerDied","Data":"7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07"} Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:10.998050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c67cb67f8-fprzn" event={"ID":"e4790bc2-efc0-4c08-ba24-285ca999d0c0","Type":"ContainerDied","Data":"a7bd91e62ebb5648cc7d43316020da5502f4a0dfb3676cf5ee4f32ceb330fa28"} Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:10.998069 4768 scope.go:117] "RemoveContainer" containerID="7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:10.999986 4768 generic.go:334] "Generic (PLEG): container finished" podID="14124682-b6b8-4c21-a24f-f3368478a1d3" containerID="a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c" exitCode=0 Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.000019 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" event={"ID":"14124682-b6b8-4c21-a24f-f3368478a1d3","Type":"ContainerDied","Data":"a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c"} Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.000031 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.000045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765" event={"ID":"14124682-b6b8-4c21-a24f-f3368478a1d3","Type":"ContainerDied","Data":"6f37f934269f4fcf35e709d41550a0f94f14435336a81eaab9dc7824dabc536c"} Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.012117 4768 scope.go:117] "RemoveContainer" containerID="7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07" Feb 17 13:40:11 crc kubenswrapper[4768]: E0217 13:40:11.012543 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07\": container with ID starting with 7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07 not found: ID does not exist" containerID="7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.012575 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07"} err="failed to get container status \"7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07\": rpc error: code = NotFound desc = could not find container \"7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07\": container with ID starting with 7c1fdecc3cd0c115a2c40cd4e215305c414fa83ac3d774fb8dca6c2edb3d1c07 not found: ID does not exist" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.012591 4768 scope.go:117] "RemoveContainer" containerID="a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026005 4768 scope.go:117] "RemoveContainer" 
containerID="a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026038 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6b4\" (UniqueName: \"kubernetes.io/projected/e4790bc2-efc0-4c08-ba24-285ca999d0c0-kube-api-access-nj6b4\") pod \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026086 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-client-ca\") pod \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026159 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14124682-b6b8-4c21-a24f-f3368478a1d3-serving-cert\") pod \"14124682-b6b8-4c21-a24f-f3368478a1d3\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026180 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4790bc2-efc0-4c08-ba24-285ca999d0c0-serving-cert\") pod \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-proxy-ca-bundles\") pod \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026241 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-client-ca\") pod \"14124682-b6b8-4c21-a24f-f3368478a1d3\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026259 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-config\") pod \"14124682-b6b8-4c21-a24f-f3368478a1d3\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026284 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbj95\" (UniqueName: \"kubernetes.io/projected/14124682-b6b8-4c21-a24f-f3368478a1d3-kube-api-access-dbj95\") pod \"14124682-b6b8-4c21-a24f-f3368478a1d3\" (UID: \"14124682-b6b8-4c21-a24f-f3368478a1d3\") " Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026310 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-config\") pod \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\" (UID: \"e4790bc2-efc0-4c08-ba24-285ca999d0c0\") " Feb 17 13:40:11 crc kubenswrapper[4768]: E0217 13:40:11.026539 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c\": container with ID starting with a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c not found: ID does not exist" containerID="a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.026579 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c"} err="failed to get container status 
\"a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c\": rpc error: code = NotFound desc = could not find container \"a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c\": container with ID starting with a50e473337b3212ccd7fdf64c666b2936d0f1ba9ec07d808971d138c0bd5b58c not found: ID does not exist" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.027461 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-client-ca" (OuterVolumeSpecName: "client-ca") pod "14124682-b6b8-4c21-a24f-f3368478a1d3" (UID: "14124682-b6b8-4c21-a24f-f3368478a1d3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.027470 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-config" (OuterVolumeSpecName: "config") pod "14124682-b6b8-4c21-a24f-f3368478a1d3" (UID: "14124682-b6b8-4c21-a24f-f3368478a1d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.027548 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "e4790bc2-efc0-4c08-ba24-285ca999d0c0" (UID: "e4790bc2-efc0-4c08-ba24-285ca999d0c0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.027775 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e4790bc2-efc0-4c08-ba24-285ca999d0c0" (UID: "e4790bc2-efc0-4c08-ba24-285ca999d0c0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.028074 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-config" (OuterVolumeSpecName: "config") pod "e4790bc2-efc0-4c08-ba24-285ca999d0c0" (UID: "e4790bc2-efc0-4c08-ba24-285ca999d0c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.031200 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14124682-b6b8-4c21-a24f-f3368478a1d3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14124682-b6b8-4c21-a24f-f3368478a1d3" (UID: "14124682-b6b8-4c21-a24f-f3368478a1d3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.031199 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14124682-b6b8-4c21-a24f-f3368478a1d3-kube-api-access-dbj95" (OuterVolumeSpecName: "kube-api-access-dbj95") pod "14124682-b6b8-4c21-a24f-f3368478a1d3" (UID: "14124682-b6b8-4c21-a24f-f3368478a1d3"). InnerVolumeSpecName "kube-api-access-dbj95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.031225 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4790bc2-efc0-4c08-ba24-285ca999d0c0-kube-api-access-nj6b4" (OuterVolumeSpecName: "kube-api-access-nj6b4") pod "e4790bc2-efc0-4c08-ba24-285ca999d0c0" (UID: "e4790bc2-efc0-4c08-ba24-285ca999d0c0"). InnerVolumeSpecName "kube-api-access-nj6b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.033147 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4790bc2-efc0-4c08-ba24-285ca999d0c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e4790bc2-efc0-4c08-ba24-285ca999d0c0" (UID: "e4790bc2-efc0-4c08-ba24-285ca999d0c0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127282 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127521 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6b4\" (UniqueName: \"kubernetes.io/projected/e4790bc2-efc0-4c08-ba24-285ca999d0c0-kube-api-access-nj6b4\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127532 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127543 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14124682-b6b8-4c21-a24f-f3368478a1d3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127551 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4790bc2-efc0-4c08-ba24-285ca999d0c0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127559 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e4790bc2-efc0-4c08-ba24-285ca999d0c0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127568 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127577 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14124682-b6b8-4c21-a24f-f3368478a1d3-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.127585 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbj95\" (UniqueName: \"kubernetes.io/projected/14124682-b6b8-4c21-a24f-f3368478a1d3-kube-api-access-dbj95\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.341828 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c67cb67f8-fprzn"] Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.350396 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c67cb67f8-fprzn"] Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.354594 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765"] Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.358551 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9c747f-mj765"] Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.542402 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14124682-b6b8-4c21-a24f-f3368478a1d3" path="/var/lib/kubelet/pods/14124682-b6b8-4c21-a24f-f3368478a1d3/volumes" Feb 17 13:40:11 crc kubenswrapper[4768]: I0217 13:40:11.542887 
4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4790bc2-efc0-4c08-ba24-285ca999d0c0" path="/var/lib/kubelet/pods/e4790bc2-efc0-4c08-ba24-285ca999d0c0/volumes" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150093 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c5898848c-6kjk2"] Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150371 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150386 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150411 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="extract-utilities" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150421 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="extract-utilities" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150434 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150443 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150455 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4790bc2-efc0-4c08-ba24-285ca999d0c0" containerName="controller-manager" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150463 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4790bc2-efc0-4c08-ba24-285ca999d0c0" containerName="controller-manager" Feb 17 13:40:12 crc 
kubenswrapper[4768]: E0217 13:40:12.150474 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150482 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150493 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="extract-content" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150502 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="extract-content" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150519 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14124682-b6b8-4c21-a24f-f3368478a1d3" containerName="route-controller-manager" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150527 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="14124682-b6b8-4c21-a24f-f3368478a1d3" containerName="route-controller-manager" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150538 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="extract-utilities" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150548 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="extract-utilities" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150558 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="extract-content" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150567 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="extract-content" Feb 17 
13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150576 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="extract-utilities" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150584 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="extract-utilities" Feb 17 13:40:12 crc kubenswrapper[4768]: E0217 13:40:12.150596 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="extract-content" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150604 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="extract-content" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150715 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a25c47f-7f1c-42ed-85bf-acfe8949338b" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150731 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="14124682-b6b8-4c21-a24f-f3368478a1d3" containerName="route-controller-manager" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150755 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e826302e-7052-4a6e-a626-93b7b433096a" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150778 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7de1a69-e892-4ca4-a61f-20a221ce38ba" containerName="registry-server" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.150794 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4790bc2-efc0-4c08-ba24-285ca999d0c0" containerName="controller-manager" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.151280 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.157305 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.157793 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.157891 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.158156 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.158167 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.158743 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.168248 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.170600 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4"] Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.172692 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.175887 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5898848c-6kjk2"] Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.179858 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.183118 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.183361 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.183587 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.183870 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.184222 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4"] Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.184402 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240423 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877609b5-77a4-465e-a34f-97021c422a3e-config\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") 
" pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240531 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-config\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240556 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5bffa1-7812-400f-82bb-808b666d6f45-serving-cert\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240684 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877609b5-77a4-465e-a34f-97021c422a3e-serving-cert\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240760 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877609b5-77a4-465e-a34f-97021c422a3e-client-ca\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-gxkfh\" (UniqueName: \"kubernetes.io/projected/4c5bffa1-7812-400f-82bb-808b666d6f45-kube-api-access-gxkfh\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240868 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-client-ca\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8k8v\" (UniqueName: \"kubernetes.io/projected/877609b5-77a4-465e-a34f-97021c422a3e-kube-api-access-s8k8v\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.240989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-proxy-ca-bundles\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342344 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-config\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " 
pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342420 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5bffa1-7812-400f-82bb-808b666d6f45-serving-cert\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342465 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877609b5-77a4-465e-a34f-97021c422a3e-serving-cert\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342512 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877609b5-77a4-465e-a34f-97021c422a3e-client-ca\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxkfh\" (UniqueName: \"kubernetes.io/projected/4c5bffa1-7812-400f-82bb-808b666d6f45-kube-api-access-gxkfh\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-client-ca\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342651 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8k8v\" (UniqueName: \"kubernetes.io/projected/877609b5-77a4-465e-a34f-97021c422a3e-kube-api-access-s8k8v\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342706 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-proxy-ca-bundles\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.342769 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877609b5-77a4-465e-a34f-97021c422a3e-config\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.344171 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877609b5-77a4-465e-a34f-97021c422a3e-client-ca\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc 
kubenswrapper[4768]: I0217 13:40:12.344560 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877609b5-77a4-465e-a34f-97021c422a3e-config\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.345161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-config\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.345196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-client-ca\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.348907 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c5bffa1-7812-400f-82bb-808b666d6f45-proxy-ca-bundles\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.356537 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877609b5-77a4-465e-a34f-97021c422a3e-serving-cert\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.356582 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5bffa1-7812-400f-82bb-808b666d6f45-serving-cert\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.375896 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8k8v\" (UniqueName: \"kubernetes.io/projected/877609b5-77a4-465e-a34f-97021c422a3e-kube-api-access-s8k8v\") pod \"route-controller-manager-7d57b47777-xv8m4\" (UID: \"877609b5-77a4-465e-a34f-97021c422a3e\") " pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.386748 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxkfh\" (UniqueName: \"kubernetes.io/projected/4c5bffa1-7812-400f-82bb-808b666d6f45-kube-api-access-gxkfh\") pod \"controller-manager-6c5898848c-6kjk2\" (UID: \"4c5bffa1-7812-400f-82bb-808b666d6f45\") " pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.495641 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.518552 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.897915 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4"] Feb 17 13:40:12 crc kubenswrapper[4768]: I0217 13:40:12.900375 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5898848c-6kjk2"] Feb 17 13:40:12 crc kubenswrapper[4768]: W0217 13:40:12.908417 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877609b5_77a4_465e_a34f_97021c422a3e.slice/crio-ee581256b03c8ca0d33a0a787d916ee2a0a17405df3d49e7b0744d2c8f681ab8 WatchSource:0}: Error finding container ee581256b03c8ca0d33a0a787d916ee2a0a17405df3d49e7b0744d2c8f681ab8: Status 404 returned error can't find the container with id ee581256b03c8ca0d33a0a787d916ee2a0a17405df3d49e7b0744d2c8f681ab8 Feb 17 13:40:12 crc kubenswrapper[4768]: W0217 13:40:12.911326 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c5bffa1_7812_400f_82bb_808b666d6f45.slice/crio-92a811e889910a745f773566f6353752de6272b3d93a2f5dee6edee8246ac36d WatchSource:0}: Error finding container 92a811e889910a745f773566f6353752de6272b3d93a2f5dee6edee8246ac36d: Status 404 returned error can't find the container with id 92a811e889910a745f773566f6353752de6272b3d93a2f5dee6edee8246ac36d Feb 17 13:40:13 crc kubenswrapper[4768]: I0217 13:40:13.019388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" event={"ID":"4c5bffa1-7812-400f-82bb-808b666d6f45","Type":"ContainerStarted","Data":"92a811e889910a745f773566f6353752de6272b3d93a2f5dee6edee8246ac36d"} Feb 17 13:40:13 crc kubenswrapper[4768]: I0217 13:40:13.020810 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" event={"ID":"877609b5-77a4-465e-a34f-97021c422a3e","Type":"ContainerStarted","Data":"ee581256b03c8ca0d33a0a787d916ee2a0a17405df3d49e7b0744d2c8f681ab8"} Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.027948 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" event={"ID":"877609b5-77a4-465e-a34f-97021c422a3e","Type":"ContainerStarted","Data":"595e1b64c334160cdaa7ed01db6928ac434aacee0b415d3b51f9ec973aabc3cb"} Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.028316 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.030164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" event={"ID":"4c5bffa1-7812-400f-82bb-808b666d6f45","Type":"ContainerStarted","Data":"f10838d85a6238ebc6b7dea18cdf12bedfc3649ef1aeb9a7120856235ad86a19"} Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.030365 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.035478 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.035788 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.050869 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-7d57b47777-xv8m4" podStartSLOduration=4.050849385 podStartE2EDuration="4.050849385s" podCreationTimestamp="2026-02-17 13:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:40:14.049669601 +0000 UTC m=+233.329056043" watchObservedRunningTime="2026-02-17 13:40:14.050849385 +0000 UTC m=+233.330235827" Feb 17 13:40:14 crc kubenswrapper[4768]: I0217 13:40:14.094471 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c5898848c-6kjk2" podStartSLOduration=4.094449033 podStartE2EDuration="4.094449033s" podCreationTimestamp="2026-02-17 13:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:40:14.090677172 +0000 UTC m=+233.370063644" watchObservedRunningTime="2026-02-17 13:40:14.094449033 +0000 UTC m=+233.373835485" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.402476 4768 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.403072 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340" gracePeriod=15 Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.403247 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd" gracePeriod=15 Feb 17 13:40:16 crc 
kubenswrapper[4768]: I0217 13:40:16.403294 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903" gracePeriod=15 Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.403318 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407" gracePeriod=15 Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.403481 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0" gracePeriod=15 Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.403973 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404255 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404269 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404283 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404291 4768 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404299 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404307 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404319 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404326 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404545 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404551 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404562 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404570 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404593 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 
13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404600 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404711 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404722 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404731 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404751 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404762 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404775 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404787 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: E0217 13:40:16.404917 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.404928 4768 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.406459 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.406875 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.411728 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.597118 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.597798 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.597930 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.598029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.598074 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.598201 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.598306 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.598338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699248 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699312 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699329 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699344 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699363 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699426 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699447 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699464 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699358 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699592 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:16 crc kubenswrapper[4768]: I0217 13:40:16.699576 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.054673 4768 generic.go:334] "Generic (PLEG): container finished" podID="81f5c792-d672-4014-84b4-c7b05fbb1139" containerID="e5f4cec90449264cb08f44fc8d7c61f6443966fa498283ba332f6f62f8e55442" exitCode=0 Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.054762 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"81f5c792-d672-4014-84b4-c7b05fbb1139","Type":"ContainerDied","Data":"e5f4cec90449264cb08f44fc8d7c61f6443966fa498283ba332f6f62f8e55442"} Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.055720 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.057762 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.059627 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.060696 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd" exitCode=0 Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.060741 4768 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407" exitCode=0 Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.060762 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903" exitCode=0 Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.060783 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0" exitCode=2 Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.060799 4768 scope.go:117] "RemoveContainer" containerID="9ecebe409455190192bdd157c1aa1c3eaec5d7930ccfaa6c456a1d04be7fff2f" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.153229 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" containerName="oauth-openshift" containerID="cri-o://2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae" gracePeriod=15 Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.592826 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.593898 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.594265 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711076 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-router-certs\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711241 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-serving-cert\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711304 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-login\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: 
\"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711345 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-trusted-ca-bundle\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711392 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-service-ca\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711458 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-provider-selection\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711495 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-audit-policies\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-cliconfig\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc 
kubenswrapper[4768]: I0217 13:40:17.711569 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-idp-0-file-data\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711592 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7837593-1275-40cb-820f-afe9cb13fad4-audit-dir\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.711766 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7837593-1275-40cb-820f-afe9cb13fad4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712165 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-session\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712416 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712442 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712518 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712561 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712585 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-error\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712705 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-ocp-branding-template\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.712779 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr2bq\" (UniqueName: \"kubernetes.io/projected/b7837593-1275-40cb-820f-afe9cb13fad4-kube-api-access-dr2bq\") pod \"b7837593-1275-40cb-820f-afe9cb13fad4\" (UID: \"b7837593-1275-40cb-820f-afe9cb13fad4\") " Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.713434 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.713484 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.713513 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.713543 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.713583 4768 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7837593-1275-40cb-820f-afe9cb13fad4-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.717578 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.718130 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.719387 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.723776 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.724058 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.724781 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.724999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7837593-1275-40cb-820f-afe9cb13fad4-kube-api-access-dr2bq" (OuterVolumeSpecName: "kube-api-access-dr2bq") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "kube-api-access-dr2bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.725349 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.726175 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7837593-1275-40cb-820f-afe9cb13fad4" (UID: "b7837593-1275-40cb-820f-afe9cb13fad4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814773 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814816 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814831 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814847 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814860 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814872 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814885 4768 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814897 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr2bq\" (UniqueName: \"kubernetes.io/projected/b7837593-1275-40cb-820f-afe9cb13fad4-kube-api-access-dr2bq\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:17 crc kubenswrapper[4768]: I0217 13:40:17.814909 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7837593-1275-40cb-820f-afe9cb13fad4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.081674 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.084900 4768 generic.go:334] "Generic (PLEG): container finished" podID="b7837593-1275-40cb-820f-afe9cb13fad4" containerID="2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae" exitCode=0 Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.085148 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.088165 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" event={"ID":"b7837593-1275-40cb-820f-afe9cb13fad4","Type":"ContainerDied","Data":"2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae"} Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.088211 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" event={"ID":"b7837593-1275-40cb-820f-afe9cb13fad4","Type":"ContainerDied","Data":"ea207b861d45db8aeb43f91c93ccb6d52238a7cffe4a80ef5a245306657ce11f"} Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.088233 4768 scope.go:117] "RemoveContainer" containerID="2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.089040 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.089353 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.148699 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.149661 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.157604 4768 scope.go:117] "RemoveContainer" containerID="2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae" Feb 17 13:40:18 crc kubenswrapper[4768]: E0217 13:40:18.158210 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae\": container with ID starting with 2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae not found: ID does not exist" containerID="2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.158255 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae"} err="failed to get container status \"2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae\": rpc error: code = NotFound desc = could not find container \"2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae\": container with ID starting with 2e8d591d0568d9dddb8d23c2d1b01f96de95451697c8b04ea96b7905823fb8ae not found: ID does not exist" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.416001 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.416699 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.417229 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.421738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81f5c792-d672-4014-84b4-c7b05fbb1139-kube-api-access\") pod \"81f5c792-d672-4014-84b4-c7b05fbb1139\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.421863 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-kubelet-dir\") pod \"81f5c792-d672-4014-84b4-c7b05fbb1139\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.421951 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "81f5c792-d672-4014-84b4-c7b05fbb1139" (UID: "81f5c792-d672-4014-84b4-c7b05fbb1139"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.422019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-var-lock\") pod \"81f5c792-d672-4014-84b4-c7b05fbb1139\" (UID: \"81f5c792-d672-4014-84b4-c7b05fbb1139\") " Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.422090 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-var-lock" (OuterVolumeSpecName: "var-lock") pod "81f5c792-d672-4014-84b4-c7b05fbb1139" (UID: "81f5c792-d672-4014-84b4-c7b05fbb1139"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.422358 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.422407 4768 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/81f5c792-d672-4014-84b4-c7b05fbb1139-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.426172 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f5c792-d672-4014-84b4-c7b05fbb1139-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "81f5c792-d672-4014-84b4-c7b05fbb1139" (UID: "81f5c792-d672-4014-84b4-c7b05fbb1139"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.523677 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81f5c792-d672-4014-84b4-c7b05fbb1139-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.776360 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.778226 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.778752 4768 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.778995 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.779342 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 
13:40:18.827498 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827554 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827577 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827628 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827642 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827766 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827975 4768 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827986 4768 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:18 crc kubenswrapper[4768]: I0217 13:40:18.827995 4768 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.101045 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.101782 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"81f5c792-d672-4014-84b4-c7b05fbb1139","Type":"ContainerDied","Data":"64202ad4659f8938866d6c0d4534b4c9d815740fa76dc37600bfe19ea425aac3"} Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.101837 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64202ad4659f8938866d6c0d4534b4c9d815740fa76dc37600bfe19ea425aac3" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.105267 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.106081 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340" exitCode=0 Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.106199 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.106203 4768 scope.go:117] "RemoveContainer" containerID="1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.122395 4768 scope.go:117] "RemoveContainer" containerID="7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.123826 4768 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.124243 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.124531 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.124862 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" 
Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.125123 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.125367 4768 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.134197 4768 scope.go:117] "RemoveContainer" containerID="52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.145421 4768 scope.go:117] "RemoveContainer" containerID="c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.154828 4768 scope.go:117] "RemoveContainer" containerID="1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.165374 4768 scope.go:117] "RemoveContainer" containerID="9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.184178 4768 scope.go:117] "RemoveContainer" containerID="1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd" Feb 17 13:40:19 crc kubenswrapper[4768]: E0217 13:40:19.184661 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\": container with ID starting with 
1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd not found: ID does not exist" containerID="1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.184707 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd"} err="failed to get container status \"1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\": rpc error: code = NotFound desc = could not find container \"1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd\": container with ID starting with 1670eca5569373c8bcdf22d19faf8794c43207f7711d502eeb2ade60d643f5dd not found: ID does not exist" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.184738 4768 scope.go:117] "RemoveContainer" containerID="7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407" Feb 17 13:40:19 crc kubenswrapper[4768]: E0217 13:40:19.185115 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\": container with ID starting with 7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407 not found: ID does not exist" containerID="7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.185172 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407"} err="failed to get container status \"7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\": rpc error: code = NotFound desc = could not find container \"7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407\": container with ID starting with 7641c2023d65064decc69606567dff6eff07116a6cc62f9d1b1cddb34a39a407 not found: ID does not 
exist" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.185210 4768 scope.go:117] "RemoveContainer" containerID="52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903" Feb 17 13:40:19 crc kubenswrapper[4768]: E0217 13:40:19.186042 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\": container with ID starting with 52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903 not found: ID does not exist" containerID="52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.186067 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903"} err="failed to get container status \"52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\": rpc error: code = NotFound desc = could not find container \"52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903\": container with ID starting with 52dd97c46a2e371a68dcd9558be0f94d3bdd20c7a6ba6a65178fc09cdceea903 not found: ID does not exist" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.186080 4768 scope.go:117] "RemoveContainer" containerID="c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0" Feb 17 13:40:19 crc kubenswrapper[4768]: E0217 13:40:19.186501 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\": container with ID starting with c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0 not found: ID does not exist" containerID="c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.186563 4768 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0"} err="failed to get container status \"c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\": rpc error: code = NotFound desc = could not find container \"c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0\": container with ID starting with c4f0320f35632987c0e1f7eb6655c7221d7c159f1473088fa143a7c69a60f5b0 not found: ID does not exist" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.186591 4768 scope.go:117] "RemoveContainer" containerID="1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340" Feb 17 13:40:19 crc kubenswrapper[4768]: E0217 13:40:19.186955 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\": container with ID starting with 1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340 not found: ID does not exist" containerID="1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.187002 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340"} err="failed to get container status \"1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\": rpc error: code = NotFound desc = could not find container \"1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340\": container with ID starting with 1dd90501c9cf3ad0b27fefe4999afdfeaf0a8262728c8d47cd4714b0ef71d340 not found: ID does not exist" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.187025 4768 scope.go:117] "RemoveContainer" containerID="9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9" Feb 17 13:40:19 crc kubenswrapper[4768]: E0217 13:40:19.187461 4768 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\": container with ID starting with 9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9 not found: ID does not exist" containerID="9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.187485 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9"} err="failed to get container status \"9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\": rpc error: code = NotFound desc = could not find container \"9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9\": container with ID starting with 9dffd0f3cc3b89ababe812ea2e550f319242913da2908f466cbb8d9ac541e4b9 not found: ID does not exist" Feb 17 13:40:19 crc kubenswrapper[4768]: I0217 13:40:19.548647 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 13:40:21 crc kubenswrapper[4768]: E0217 13:40:21.441354 4768 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:21 crc kubenswrapper[4768]: I0217 13:40:21.444174 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:21 crc kubenswrapper[4768]: W0217 13:40:21.483045 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6890bb4fff04561f00e900359819b8480a99f044ba714399905493555b6ae881 WatchSource:0}: Error finding container 6890bb4fff04561f00e900359819b8480a99f044ba714399905493555b6ae881: Status 404 returned error can't find the container with id 6890bb4fff04561f00e900359819b8480a99f044ba714399905493555b6ae881 Feb 17 13:40:21 crc kubenswrapper[4768]: E0217 13:40:21.486493 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950c5ce0a9eb8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:40:21.485915022 +0000 UTC m=+240.765301454,LastTimestamp:2026-02-17 13:40:21.485915022 +0000 UTC m=+240.765301454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 13:40:21 crc kubenswrapper[4768]: I0217 13:40:21.537652 4768 status_manager.go:851] "Failed to get status for pod" 
podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:21 crc kubenswrapper[4768]: I0217 13:40:21.537860 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: I0217 13:40:22.128535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"47ab018dc21478799cd53fdc410f689574630af9510f05e80a5bf5673d7b24ab"} Feb 17 13:40:22 crc kubenswrapper[4768]: I0217 13:40:22.128771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6890bb4fff04561f00e900359819b8480a99f044ba714399905493555b6ae881"} Feb 17 13:40:22 crc kubenswrapper[4768]: I0217 13:40:22.129310 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.129426 4768 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:40:22 crc kubenswrapper[4768]: I0217 13:40:22.129587 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.634566 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.635256 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.635678 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.636199 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.636583 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:22 crc kubenswrapper[4768]: I0217 
13:40:22.636615 4768 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.636945 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Feb 17 13:40:22 crc kubenswrapper[4768]: E0217 13:40:22.837717 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Feb 17 13:40:23 crc kubenswrapper[4768]: E0217 13:40:23.239067 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Feb 17 13:40:24 crc kubenswrapper[4768]: E0217 13:40:24.040562 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Feb 17 13:40:25 crc kubenswrapper[4768]: E0217 13:40:25.606555 4768 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" 
volumeName="registry-storage" Feb 17 13:40:25 crc kubenswrapper[4768]: E0217 13:40:25.641319 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Feb 17 13:40:27 crc kubenswrapper[4768]: E0217 13:40:27.488600 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18950c5ce0a9eb8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 13:40:21.485915022 +0000 UTC m=+240.765301454,LastTimestamp:2026-02-17 13:40:21.485915022 +0000 UTC m=+240.765301454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 13:40:28 crc kubenswrapper[4768]: E0217 13:40:28.842816 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="6.4s" Feb 17 13:40:30 crc kubenswrapper[4768]: I0217 13:40:30.534206 4768 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:30 crc kubenswrapper[4768]: I0217 13:40:30.535721 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:30 crc kubenswrapper[4768]: I0217 13:40:30.536608 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:30 crc kubenswrapper[4768]: I0217 13:40:30.562067 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:30 crc kubenswrapper[4768]: I0217 13:40:30.562142 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:30 crc kubenswrapper[4768]: E0217 13:40:30.562807 4768 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:30 crc kubenswrapper[4768]: I0217 13:40:30.563518 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.188058 4768 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3049b2fc32803fd261176fab3808d7dae4f06126fbd96a621b46e1b577a7331b" exitCode=0 Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.188183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3049b2fc32803fd261176fab3808d7dae4f06126fbd96a621b46e1b577a7331b"} Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.188537 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1d57a1217a31d61623466d62ec046fda900055ef4e36ea5d46ce3a49e81ea24a"} Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.188930 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.188961 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:31 crc kubenswrapper[4768]: E0217 13:40:31.189522 4768 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.190083 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.190725 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.192611 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.192669 4768 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f" exitCode=1 Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.192701 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f"} Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.193254 4768 scope.go:117] "RemoveContainer" containerID="47db5ef94c07ecce7b368bb809a736c9532fe3e47081cfa7e248b8b40d1d243f" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.193718 4768 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 
17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.194251 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.194749 4768 status_manager.go:851] "Failed to get status for pod" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.420002 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.540337 4768 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.541148 4768 status_manager.go:851] "Failed to get status for pod" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" pod="openshift-authentication/oauth-openshift-558db77b4-h77q6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-h77q6\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.541776 4768 status_manager.go:851] "Failed to get status for pod" 
podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:31 crc kubenswrapper[4768]: I0217 13:40:31.542549 4768 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Feb 17 13:40:32 crc kubenswrapper[4768]: I0217 13:40:32.209056 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d2bb7c5eb0cd3a05c8d176abda0ed2daca2f39ff425005ea6de5a1d9aab6ceb2"} Feb 17 13:40:32 crc kubenswrapper[4768]: I0217 13:40:32.209403 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ac3435ac0de1d6981711ae2a3a99fd687e723cfb93f5eff79a6b1f6f307cfa69"} Feb 17 13:40:32 crc kubenswrapper[4768]: I0217 13:40:32.209414 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"90769cd87a10b871a661d0bfdc9c97e93d433264c30b28f1088edfbf54beae3a"} Feb 17 13:40:32 crc kubenswrapper[4768]: I0217 13:40:32.216773 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 13:40:32 crc kubenswrapper[4768]: I0217 13:40:32.216854 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c59351f99949503ebd21a6c4fc57c2fde5638524a08b29e95edeb4d4972022eb"} Feb 17 13:40:33 crc kubenswrapper[4768]: I0217 13:40:33.224774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"403adf82b2bd15831c1bbe07955166b719119e8e72de77dc83f209f64e54e74d"} Feb 17 13:40:33 crc kubenswrapper[4768]: I0217 13:40:33.224837 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f00cd56fd22d4127513941743234454e117203a0d898d05af17b72bc2446b82"} Feb 17 13:40:33 crc kubenswrapper[4768]: I0217 13:40:33.224995 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:33 crc kubenswrapper[4768]: I0217 13:40:33.225180 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:33 crc kubenswrapper[4768]: I0217 13:40:33.225205 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:34 crc kubenswrapper[4768]: I0217 13:40:34.683550 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:40:35 crc kubenswrapper[4768]: I0217 13:40:35.564523 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:35 crc kubenswrapper[4768]: I0217 13:40:35.564584 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:35 crc kubenswrapper[4768]: I0217 13:40:35.571667 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:37 crc kubenswrapper[4768]: I0217 13:40:37.825739 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:40:37 crc kubenswrapper[4768]: I0217 13:40:37.831087 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:40:38 crc kubenswrapper[4768]: I0217 13:40:38.241252 4768 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:38 crc kubenswrapper[4768]: I0217 13:40:38.248197 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88441663-1346-4328-a433-63cb4d9b6722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:40:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:40:31Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:40:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T13:40:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3049b2fc32803fd261176fab3808d7dae4f06126fbd96a621b46e1b577a7331b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3049b2fc32803fd261176fab3808d7dae4f06126fbd96a621b46e1b577a7331b\\\",\\\"exitCod
e\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T13:40:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T13:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Feb 17 13:40:39 crc kubenswrapper[4768]: I0217 13:40:39.259557 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:39 crc kubenswrapper[4768]: I0217 13:40:39.259583 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:39 crc kubenswrapper[4768]: I0217 13:40:39.267461 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:39 crc kubenswrapper[4768]: I0217 13:40:39.270409 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8ca6466e-2c3a-480f-97ee-64b4f2742c7b" Feb 17 13:40:40 crc kubenswrapper[4768]: I0217 13:40:40.269119 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:40 crc kubenswrapper[4768]: I0217 13:40:40.269553 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88441663-1346-4328-a433-63cb4d9b6722" Feb 17 13:40:40 crc kubenswrapper[4768]: I0217 13:40:40.273663 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8ca6466e-2c3a-480f-97ee-64b4f2742c7b" Feb 17 13:40:44 
crc kubenswrapper[4768]: I0217 13:40:44.687681 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 13:40:48 crc kubenswrapper[4768]: I0217 13:40:48.034606 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 13:40:48 crc kubenswrapper[4768]: I0217 13:40:48.071499 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 13:40:48 crc kubenswrapper[4768]: I0217 13:40:48.182759 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 13:40:48 crc kubenswrapper[4768]: I0217 13:40:48.363355 4768 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 13:40:48 crc kubenswrapper[4768]: I0217 13:40:48.545808 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 13:40:48 crc kubenswrapper[4768]: I0217 13:40:48.829671 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 13:40:48 crc kubenswrapper[4768]: I0217 13:40:48.847611 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 13:40:49 crc kubenswrapper[4768]: I0217 13:40:49.008633 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 13:40:49 crc kubenswrapper[4768]: I0217 13:40:49.182020 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 13:40:49 crc kubenswrapper[4768]: I0217 13:40:49.182504 4768 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 13:40:49 crc kubenswrapper[4768]: I0217 13:40:49.435676 4768 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 13:40:49 crc kubenswrapper[4768]: I0217 13:40:49.696621 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.081985 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.089587 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.133652 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.146017 4768 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.171136 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.740933 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.763756 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.815249 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 13:40:50 crc kubenswrapper[4768]: I0217 13:40:50.896095 4768 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.004815 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.032067 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.204902 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.400444 4768 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.405646 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h77q6","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.405709 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.410071 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.428089 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.428070989 podStartE2EDuration="13.428070989s" podCreationTimestamp="2026-02-17 13:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:40:51.424367296 +0000 UTC m=+270.703753748" watchObservedRunningTime="2026-02-17 13:40:51.428070989 +0000 UTC m=+270.707457451" Feb 17 13:40:51 crc 
kubenswrapper[4768]: I0217 13:40:51.451691 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.488292 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.540857 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" path="/var/lib/kubelet/pods/b7837593-1275-40cb-820f-afe9cb13fad4/volumes" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.572903 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.680260 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.715634 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.734383 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.812480 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.815937 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 13:40:51 crc kubenswrapper[4768]: I0217 13:40:51.830605 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.035835 4768 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.126235 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.147152 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.234526 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.241897 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.265524 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.266229 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.356991 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.374223 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.661971 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.671464 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-556766588f-d8d64"] Feb 17 13:40:52 crc kubenswrapper[4768]: E0217 13:40:52.671671 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" containerName="installer" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.671683 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" containerName="installer" Feb 17 13:40:52 crc kubenswrapper[4768]: E0217 13:40:52.671693 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" containerName="oauth-openshift" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.671701 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" containerName="oauth-openshift" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.671804 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7837593-1275-40cb-820f-afe9cb13fad4" containerName="oauth-openshift" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.671818 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f5c792-d672-4014-84b4-c7b05fbb1139" containerName="installer" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.672224 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.678241 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.678300 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.678436 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.678491 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.678651 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.678884 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.679133 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.679278 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.679526 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.679821 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 13:40:52 crc 
kubenswrapper[4768]: I0217 13:40:52.680932 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.685921 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.687933 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.692494 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.695615 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.709712 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.767050 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.779744 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849044 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-login\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 
crc kubenswrapper[4768]: I0217 13:40:52.849143 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849190 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-error\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849269 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849304 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849336 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp8hz\" (UniqueName: \"kubernetes.io/projected/d032f2fe-bb01-492a-9581-834a5c4238f0-kube-api-access-bp8hz\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849362 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849398 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-audit-policies\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849434 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-session\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849500 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849542 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.849590 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d032f2fe-bb01-492a-9581-834a5c4238f0-audit-dir\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " 
pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.855539 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.907747 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.951428 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp8hz\" (UniqueName: \"kubernetes.io/projected/d032f2fe-bb01-492a-9581-834a5c4238f0-kube-api-access-bp8hz\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.952007 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.952410 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-audit-policies\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.952785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.953220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.953453 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-audit-policies\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.953780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-session\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.954872 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.955471 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.956285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d032f2fe-bb01-492a-9581-834a5c4238f0-audit-dir\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.956411 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d032f2fe-bb01-492a-9581-834a5c4238f0-audit-dir\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.956603 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.958605 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-login\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " 
pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.958858 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.958652 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.959196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.959081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.959686 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-error\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.959952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.960216 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.961867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.967587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-error\") pod \"oauth-openshift-556766588f-d8d64\" (UID: 
\"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.967631 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.968322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-session\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.968703 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.968907 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-user-template-login\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.969464 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d032f2fe-bb01-492a-9581-834a5c4238f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.977169 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp8hz\" (UniqueName: \"kubernetes.io/projected/d032f2fe-bb01-492a-9581-834a5c4238f0-kube-api-access-bp8hz\") pod \"oauth-openshift-556766588f-d8d64\" (UID: \"d032f2fe-bb01-492a-9581-834a5c4238f0\") " pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:52 crc kubenswrapper[4768]: I0217 13:40:52.987185 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-556766588f-d8d64" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.037322 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.080405 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.092302 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.119187 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.150039 4768 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.191337 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.299183 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.374653 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.407574 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.462591 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.536958 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.550506 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.648727 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.692335 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.731061 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.732229 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.745283 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.750337 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.840672 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.893418 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 13:40:53 crc kubenswrapper[4768]: I0217 13:40:53.900324 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.001876 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.146491 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.191015 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.246680 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.272641 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 
13:40:54.302954 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.359050 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.379181 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.385737 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.400314 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.515365 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.517122 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.650050 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.673517 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.760579 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.762354 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 
13:40:54.773848 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.777492 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.803719 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.817672 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.862940 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.938609 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 13:40:54 crc kubenswrapper[4768]: I0217 13:40:54.946000 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.073485 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.213042 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.414570 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.466217 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 13:40:55 crc 
kubenswrapper[4768]: I0217 13:40:55.564721 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.597638 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.598253 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.609772 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.632010 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-556766588f-d8d64"]
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.688692 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.743285 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.745885 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.775290 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 17 13:40:55 crc kubenswrapper[4768]: I0217 13:40:55.782689 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.008127 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.179252 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.226074 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.296264 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.306893 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 17 13:40:56 crc kubenswrapper[4768]: E0217 13:40:56.348666 4768 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 17 13:40:56 crc kubenswrapper[4768]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-556766588f-d8d64_openshift-authentication_d032f2fe-bb01-492a-9581-834a5c4238f0_0(3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be): error adding pod openshift-authentication_oauth-openshift-556766588f-d8d64 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be" Netns:"/var/run/netns/c9054405-ef1d-409c-9352-f83f49f75e71" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-556766588f-d8d64;K8S_POD_INFRA_CONTAINER_ID=3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be;K8S_POD_UID=d032f2fe-bb01-492a-9581-834a5c4238f0" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-556766588f-d8d64] networking: Multus: [openshift-authentication/oauth-openshift-556766588f-d8d64/d032f2fe-bb01-492a-9581-834a5c4238f0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-556766588f-d8d64 in out of cluster comm: pod "oauth-openshift-556766588f-d8d64" not found
Feb 17 13:40:56 crc kubenswrapper[4768]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 17 13:40:56 crc kubenswrapper[4768]: >
Feb 17 13:40:56 crc kubenswrapper[4768]: E0217 13:40:56.348769 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 17 13:40:56 crc kubenswrapper[4768]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-556766588f-d8d64_openshift-authentication_d032f2fe-bb01-492a-9581-834a5c4238f0_0(3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be): error adding pod openshift-authentication_oauth-openshift-556766588f-d8d64 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be" Netns:"/var/run/netns/c9054405-ef1d-409c-9352-f83f49f75e71" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-556766588f-d8d64;K8S_POD_INFRA_CONTAINER_ID=3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be;K8S_POD_UID=d032f2fe-bb01-492a-9581-834a5c4238f0" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-556766588f-d8d64] networking: Multus: [openshift-authentication/oauth-openshift-556766588f-d8d64/d032f2fe-bb01-492a-9581-834a5c4238f0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-556766588f-d8d64 in out of cluster comm: pod "oauth-openshift-556766588f-d8d64" not found
Feb 17 13:40:56 crc kubenswrapper[4768]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 17 13:40:56 crc kubenswrapper[4768]: > pod="openshift-authentication/oauth-openshift-556766588f-d8d64"
Feb 17 13:40:56 crc kubenswrapper[4768]: E0217 13:40:56.348796 4768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 17 13:40:56 crc kubenswrapper[4768]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-556766588f-d8d64_openshift-authentication_d032f2fe-bb01-492a-9581-834a5c4238f0_0(3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be): error adding pod openshift-authentication_oauth-openshift-556766588f-d8d64 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be" Netns:"/var/run/netns/c9054405-ef1d-409c-9352-f83f49f75e71" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-556766588f-d8d64;K8S_POD_INFRA_CONTAINER_ID=3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be;K8S_POD_UID=d032f2fe-bb01-492a-9581-834a5c4238f0" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-556766588f-d8d64] networking: Multus: [openshift-authentication/oauth-openshift-556766588f-d8d64/d032f2fe-bb01-492a-9581-834a5c4238f0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-556766588f-d8d64 in out of cluster comm: pod "oauth-openshift-556766588f-d8d64" not found
Feb 17 13:40:56 crc kubenswrapper[4768]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 17 13:40:56 crc kubenswrapper[4768]: > pod="openshift-authentication/oauth-openshift-556766588f-d8d64"
Feb 17 13:40:56 crc kubenswrapper[4768]: E0217 13:40:56.348895 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-556766588f-d8d64_openshift-authentication(d032f2fe-bb01-492a-9581-834a5c4238f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-556766588f-d8d64_openshift-authentication(d032f2fe-bb01-492a-9581-834a5c4238f0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-556766588f-d8d64_openshift-authentication_d032f2fe-bb01-492a-9581-834a5c4238f0_0(3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be): error adding pod openshift-authentication_oauth-openshift-556766588f-d8d64 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be\\\" Netns:\\\"/var/run/netns/c9054405-ef1d-409c-9352-f83f49f75e71\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-556766588f-d8d64;K8S_POD_INFRA_CONTAINER_ID=3c8b250ef5016b559145c338a77b7d9c8cbf0a79a576c203b464175b296450be;K8S_POD_UID=d032f2fe-bb01-492a-9581-834a5c4238f0\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-556766588f-d8d64] networking: Multus: [openshift-authentication/oauth-openshift-556766588f-d8d64/d032f2fe-bb01-492a-9581-834a5c4238f0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-556766588f-d8d64 in out of cluster comm: pod \\\"oauth-openshift-556766588f-d8d64\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-556766588f-d8d64" podUID="d032f2fe-bb01-492a-9581-834a5c4238f0"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.370079 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-556766588f-d8d64"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.370518 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-556766588f-d8d64"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.493872 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.499067 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.513996 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.679819 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.732036 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.782703 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.807790 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.964386 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 17 13:40:56 crc kubenswrapper[4768]: I0217 13:40:56.982507 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.032362 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.067838 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.110824 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.172506 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.205446 4768 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.207157 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.392581 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.394404 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.418912 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.421557 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.511156 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.555224 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-556766588f-d8d64"]
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.655204 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.655385 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.677905 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.712462 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.915760 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.944519 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 17 13:40:57 crc kubenswrapper[4768]: I0217 13:40:57.991299 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.062333 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.183799 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.240911 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.355700 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.371192 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.380442 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-556766588f-d8d64" event={"ID":"d032f2fe-bb01-492a-9581-834a5c4238f0","Type":"ContainerStarted","Data":"fb41a95bb28cc2c2102741a64d59c20d19e43724d908c814ce2ce11eb27f38b7"}
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.380486 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-556766588f-d8d64" event={"ID":"d032f2fe-bb01-492a-9581-834a5c4238f0","Type":"ContainerStarted","Data":"2aefe4b664a41c7011e49cb67747385a900010e072baacdc89266620fe5f4aa6"}
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.380674 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-556766588f-d8d64"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.381340 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.385512 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-556766588f-d8d64"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.405483 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-556766588f-d8d64" podStartSLOduration=66.405466329 podStartE2EDuration="1m6.405466329s" podCreationTimestamp="2026-02-17 13:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:40:58.404744528 +0000 UTC m=+277.684130970" watchObservedRunningTime="2026-02-17 13:40:58.405466329 +0000 UTC m=+277.684852771"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.471624 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.526331 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.621927 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.624039 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.639223 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.741159 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.843722 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 17 13:40:58 crc kubenswrapper[4768]: I0217 13:40:58.864225 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.014363 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.059682 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.126878 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.312827 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.313591 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.324268 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.433380 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.470385 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.508709 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.567887 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.645357 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.664346 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.733808 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.752237 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.762317 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.882499 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.883726 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.959299 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.962691 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 13:40:59 crc kubenswrapper[4768]: I0217 13:40:59.998504 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.017323 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.108629 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.127262 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.140874 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.166008 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.169046 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.176682 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.183081 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.284353 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.348077 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.373444 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.423720 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.558450 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.724174 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.768733 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.859818 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.864947 4768 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.865430 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://47ab018dc21478799cd53fdc410f689574630af9510f05e80a5bf5673d7b24ab" gracePeriod=5
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.983949 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 17 13:41:00 crc kubenswrapper[4768]: I0217 13:41:00.991223 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.025001 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.048208 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.146144 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.155645 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.196427 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.225772 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.290901 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.316787 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.318238 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.466273 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.659986 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.793087 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 13:41:01 crc kubenswrapper[4768]: I0217 13:41:01.907059 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.037303 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.130077 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.178140 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.192280 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.245782 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.361815 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.448447 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.551901 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.575324 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.594865 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.682353 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.764425 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 17 13:41:02 crc kubenswrapper[4768]: I0217 13:41:02.776514 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.364012 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.364283 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.365128 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.365304 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.365467 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.373693 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.373898 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.376274 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.441994 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.445334 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.675573 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.692037 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 17 13:41:03 crc kubenswrapper[4768]: I0217 13:41:03.896793 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 17 13:41:04 crc kubenswrapper[4768]: I0217 13:41:04.023709 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 17 13:41:04 crc kubenswrapper[4768]: I0217 13:41:04.383567 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 13:41:04 crc kubenswrapper[4768]: I0217 13:41:04.411184 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 13:41:04 crc kubenswrapper[4768]: I0217 13:41:04.695876 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 17 13:41:05 crc kubenswrapper[4768]: I0217 13:41:05.114897 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.429762 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.430269 4768 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="47ab018dc21478799cd53fdc410f689574630af9510f05e80a5bf5673d7b24ab" exitCode=137
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.430344 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6890bb4fff04561f00e900359819b8480a99f044ba714399905493555b6ae881"
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.472342 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.472453 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.601478 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.601536 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.601589 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.601668 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.601763 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.601953 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.602051 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.602303 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.602518 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.602765 4768 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.602899 4768 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.603012 4768 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.603150 4768 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.614805 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:41:06 crc kubenswrapper[4768]: I0217 13:41:06.703796 4768 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 13:41:07 crc kubenswrapper[4768]: I0217 13:41:07.436455 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 13:41:07 crc kubenswrapper[4768]: I0217 13:41:07.547137 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 13:41:20 crc kubenswrapper[4768]: I0217 13:41:20.502607 4768 generic.go:334] "Generic (PLEG): container finished" podID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerID="82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523" exitCode=0 Feb 17 13:41:20 crc kubenswrapper[4768]: I0217 13:41:20.502694 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" event={"ID":"29f8bd1f-9a14-4725-a333-ee7509778b5d","Type":"ContainerDied","Data":"82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523"} Feb 17 13:41:20 crc kubenswrapper[4768]: I0217 13:41:20.504307 4768 scope.go:117] "RemoveContainer" containerID="82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523" Feb 17 13:41:21 crc kubenswrapper[4768]: I0217 13:41:21.327581 4768 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 13:41:21 crc kubenswrapper[4768]: I0217 13:41:21.510383 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" 
event={"ID":"29f8bd1f-9a14-4725-a333-ee7509778b5d","Type":"ContainerStarted","Data":"1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638"} Feb 17 13:41:21 crc kubenswrapper[4768]: I0217 13:41:21.511384 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:41:21 crc kubenswrapper[4768]: I0217 13:41:21.514357 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:41:21 crc kubenswrapper[4768]: I0217 13:41:21.628441 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 13:41:36 crc kubenswrapper[4768]: I0217 13:41:36.474388 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 13:41:44 crc kubenswrapper[4768]: I0217 13:41:44.696453 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 13:41:58 crc kubenswrapper[4768]: I0217 13:41:58.060305 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:41:58 crc kubenswrapper[4768]: I0217 13:41:58.060978 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.634584 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-2q82n"] Feb 17 13:42:21 crc kubenswrapper[4768]: E0217 13:42:21.635774 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.635804 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.636042 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.637009 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.648291 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2q82n"] Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722355 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f86ff986-14be-4a82-9e14-9e60085c1589-registry-certificates\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722456 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m88gw\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-kube-api-access-m88gw\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722618 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f86ff986-14be-4a82-9e14-9e60085c1589-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722676 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-registry-tls\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f86ff986-14be-4a82-9e14-9e60085c1589-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722817 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-bound-sa-token\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722870 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f86ff986-14be-4a82-9e14-9e60085c1589-trusted-ca\") pod \"image-registry-66df7c8f76-2q82n\" (UID: 
\"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.722946 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.759566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.823864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f86ff986-14be-4a82-9e14-9e60085c1589-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.823916 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-registry-tls\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.823948 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f86ff986-14be-4a82-9e14-9e60085c1589-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.823976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-bound-sa-token\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.824000 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f86ff986-14be-4a82-9e14-9e60085c1589-trusted-ca\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.824039 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f86ff986-14be-4a82-9e14-9e60085c1589-registry-certificates\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.824068 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m88gw\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-kube-api-access-m88gw\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 
13:42:21.824847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f86ff986-14be-4a82-9e14-9e60085c1589-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.825721 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f86ff986-14be-4a82-9e14-9e60085c1589-trusted-ca\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.827242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f86ff986-14be-4a82-9e14-9e60085c1589-registry-certificates\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.834351 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-registry-tls\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.834738 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f86ff986-14be-4a82-9e14-9e60085c1589-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 
13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.842357 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-bound-sa-token\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:21 crc kubenswrapper[4768]: I0217 13:42:21.845710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m88gw\" (UniqueName: \"kubernetes.io/projected/f86ff986-14be-4a82-9e14-9e60085c1589-kube-api-access-m88gw\") pod \"image-registry-66df7c8f76-2q82n\" (UID: \"f86ff986-14be-4a82-9e14-9e60085c1589\") " pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:22 crc kubenswrapper[4768]: I0217 13:42:22.003436 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:22 crc kubenswrapper[4768]: I0217 13:42:22.225547 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2q82n"] Feb 17 13:42:22 crc kubenswrapper[4768]: I0217 13:42:22.874419 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" event={"ID":"f86ff986-14be-4a82-9e14-9e60085c1589","Type":"ContainerStarted","Data":"84007112da8b10550f45489e57380e1d885bf3ee054e2c1ace56428aec48bf50"} Feb 17 13:42:22 crc kubenswrapper[4768]: I0217 13:42:22.874714 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" event={"ID":"f86ff986-14be-4a82-9e14-9e60085c1589","Type":"ContainerStarted","Data":"0f0a708ba7f68b75d2a6c819ed1a2986739c3c373640c4cb1ecfd22b9e4c3ad2"} Feb 17 13:42:22 crc kubenswrapper[4768]: I0217 13:42:22.874777 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:22 crc kubenswrapper[4768]: I0217 13:42:22.894746 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" podStartSLOduration=1.894724085 podStartE2EDuration="1.894724085s" podCreationTimestamp="2026-02-17 13:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:42:22.891774651 +0000 UTC m=+362.171161103" watchObservedRunningTime="2026-02-17 13:42:22.894724085 +0000 UTC m=+362.174110547" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.677600 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pc222"] Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.686375 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsm7m"] Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.686678 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wsm7m" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="registry-server" containerID="cri-o://9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025" gracePeriod=30 Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.686990 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pc222" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="registry-server" containerID="cri-o://38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5" gracePeriod=30 Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.689192 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wgl6"] Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.689393 4768 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" containerID="cri-o://1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638" gracePeriod=30 Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.707151 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncxf5"] Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.707857 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ncxf5" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="registry-server" containerID="cri-o://dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f" gracePeriod=30 Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.717777 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fg2l6"] Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.717995 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fg2l6" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="registry-server" containerID="cri-o://77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05" gracePeriod=30 Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.733633 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grp2v"] Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.734313 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.749701 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grp2v"] Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.839262 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949f4cbc-e86f-4f30-bac7-d31c24169e4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.839347 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949f4cbc-e86f-4f30-bac7-d31c24169e4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.839455 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrtdf\" (UniqueName: \"kubernetes.io/projected/949f4cbc-e86f-4f30-bac7-d31c24169e4e-kube-api-access-hrtdf\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.940524 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949f4cbc-e86f-4f30-bac7-d31c24169e4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grp2v\" (UID: 
\"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.940594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrtdf\" (UniqueName: \"kubernetes.io/projected/949f4cbc-e86f-4f30-bac7-d31c24169e4e-kube-api-access-hrtdf\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.940662 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949f4cbc-e86f-4f30-bac7-d31c24169e4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.942079 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949f4cbc-e86f-4f30-bac7-d31c24169e4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.947400 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949f4cbc-e86f-4f30-bac7-d31c24169e4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:27 crc kubenswrapper[4768]: I0217 13:42:27.958436 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hrtdf\" (UniqueName: \"kubernetes.io/projected/949f4cbc-e86f-4f30-bac7-d31c24169e4e-kube-api-access-hrtdf\") pod \"marketplace-operator-79b997595-grp2v\" (UID: \"949f4cbc-e86f-4f30-bac7-d31c24169e4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.049449 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.060496 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.060581 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.235969 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grp2v"] Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.490569 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.605372 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.651120 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-utilities\") pod \"a409f38d-1da9-42e5-94ff-502133f6cee2\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.651210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-catalog-content\") pod \"a409f38d-1da9-42e5-94ff-502133f6cee2\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.651244 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n4dv\" (UniqueName: \"kubernetes.io/projected/a409f38d-1da9-42e5-94ff-502133f6cee2-kube-api-access-7n4dv\") pod \"a409f38d-1da9-42e5-94ff-502133f6cee2\" (UID: \"a409f38d-1da9-42e5-94ff-502133f6cee2\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.653214 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-utilities" (OuterVolumeSpecName: "utilities") pod "a409f38d-1da9-42e5-94ff-502133f6cee2" (UID: "a409f38d-1da9-42e5-94ff-502133f6cee2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.657795 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a409f38d-1da9-42e5-94ff-502133f6cee2-kube-api-access-7n4dv" (OuterVolumeSpecName: "kube-api-access-7n4dv") pod "a409f38d-1da9-42e5-94ff-502133f6cee2" (UID: "a409f38d-1da9-42e5-94ff-502133f6cee2"). InnerVolumeSpecName "kube-api-access-7n4dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.670570 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.711120 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.717423 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a409f38d-1da9-42e5-94ff-502133f6cee2" (UID: "a409f38d-1da9-42e5-94ff-502133f6cee2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.739410 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.751803 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-utilities\") pod \"7e7136ca-949e-49ff-9f79-47e485a039cb\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.751873 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-catalog-content\") pod \"7e7136ca-949e-49ff-9f79-47e485a039cb\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.751932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkcmq\" (UniqueName: \"kubernetes.io/projected/7e7136ca-949e-49ff-9f79-47e485a039cb-kube-api-access-rkcmq\") pod \"7e7136ca-949e-49ff-9f79-47e485a039cb\" (UID: \"7e7136ca-949e-49ff-9f79-47e485a039cb\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.752279 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.752297 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n4dv\" (UniqueName: \"kubernetes.io/projected/a409f38d-1da9-42e5-94ff-502133f6cee2-kube-api-access-7n4dv\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.752308 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f38d-1da9-42e5-94ff-502133f6cee2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc 
kubenswrapper[4768]: I0217 13:42:28.754844 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7136ca-949e-49ff-9f79-47e485a039cb-kube-api-access-rkcmq" (OuterVolumeSpecName: "kube-api-access-rkcmq") pod "7e7136ca-949e-49ff-9f79-47e485a039cb" (UID: "7e7136ca-949e-49ff-9f79-47e485a039cb"). InnerVolumeSpecName "kube-api-access-rkcmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.755195 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-utilities" (OuterVolumeSpecName: "utilities") pod "7e7136ca-949e-49ff-9f79-47e485a039cb" (UID: "7e7136ca-949e-49ff-9f79-47e485a039cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.809826 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e7136ca-949e-49ff-9f79-47e485a039cb" (UID: "7e7136ca-949e-49ff-9f79-47e485a039cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.852999 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnlnm\" (UniqueName: \"kubernetes.io/projected/9497730e-2a05-40b9-a4ee-364b67a9133c-kube-api-access-rnlnm\") pod \"9497730e-2a05-40b9-a4ee-364b67a9133c\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.853072 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-catalog-content\") pod \"3529a765-a06e-42c3-9a16-959ca7662469\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.853132 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-operator-metrics\") pod \"29f8bd1f-9a14-4725-a333-ee7509778b5d\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.853160 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-catalog-content\") pod \"9497730e-2a05-40b9-a4ee-364b67a9133c\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.853731 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-trusted-ca\") pod \"29f8bd1f-9a14-4725-a333-ee7509778b5d\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.853852 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-utilities\") pod \"9497730e-2a05-40b9-a4ee-364b67a9133c\" (UID: \"9497730e-2a05-40b9-a4ee-364b67a9133c\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.854275 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "29f8bd1f-9a14-4725-a333-ee7509778b5d" (UID: "29f8bd1f-9a14-4725-a333-ee7509778b5d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.854643 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-utilities" (OuterVolumeSpecName: "utilities") pod "9497730e-2a05-40b9-a4ee-364b67a9133c" (UID: "9497730e-2a05-40b9-a4ee-364b67a9133c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.854825 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-utilities\") pod \"3529a765-a06e-42c3-9a16-959ca7662469\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.854883 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n6f7\" (UniqueName: \"kubernetes.io/projected/29f8bd1f-9a14-4725-a333-ee7509778b5d-kube-api-access-8n6f7\") pod \"29f8bd1f-9a14-4725-a333-ee7509778b5d\" (UID: \"29f8bd1f-9a14-4725-a333-ee7509778b5d\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.854930 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9skbx\" (UniqueName: \"kubernetes.io/projected/3529a765-a06e-42c3-9a16-959ca7662469-kube-api-access-9skbx\") pod \"3529a765-a06e-42c3-9a16-959ca7662469\" (UID: \"3529a765-a06e-42c3-9a16-959ca7662469\") " Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856191 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9497730e-2a05-40b9-a4ee-364b67a9133c-kube-api-access-rnlnm" (OuterVolumeSpecName: "kube-api-access-rnlnm") pod "9497730e-2a05-40b9-a4ee-364b67a9133c" (UID: "9497730e-2a05-40b9-a4ee-364b67a9133c"). InnerVolumeSpecName "kube-api-access-rnlnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856346 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-utilities" (OuterVolumeSpecName: "utilities") pod "3529a765-a06e-42c3-9a16-959ca7662469" (UID: "3529a765-a06e-42c3-9a16-959ca7662469"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856448 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "29f8bd1f-9a14-4725-a333-ee7509778b5d" (UID: "29f8bd1f-9a14-4725-a333-ee7509778b5d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856506 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856526 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkcmq\" (UniqueName: \"kubernetes.io/projected/7e7136ca-949e-49ff-9f79-47e485a039cb-kube-api-access-rkcmq\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856539 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856573 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.856585 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7136ca-949e-49ff-9f79-47e485a039cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.859032 4768 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3529a765-a06e-42c3-9a16-959ca7662469-kube-api-access-9skbx" (OuterVolumeSpecName: "kube-api-access-9skbx") pod "3529a765-a06e-42c3-9a16-959ca7662469" (UID: "3529a765-a06e-42c3-9a16-959ca7662469"). InnerVolumeSpecName "kube-api-access-9skbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.859356 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f8bd1f-9a14-4725-a333-ee7509778b5d-kube-api-access-8n6f7" (OuterVolumeSpecName: "kube-api-access-8n6f7") pod "29f8bd1f-9a14-4725-a333-ee7509778b5d" (UID: "29f8bd1f-9a14-4725-a333-ee7509778b5d"). InnerVolumeSpecName "kube-api-access-8n6f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.890277 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9497730e-2a05-40b9-a4ee-364b67a9133c" (UID: "9497730e-2a05-40b9-a4ee-364b67a9133c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.936604 4768 generic.go:334] "Generic (PLEG): container finished" podID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerID="38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5" exitCode=0 Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.936669 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc222" event={"ID":"a409f38d-1da9-42e5-94ff-502133f6cee2","Type":"ContainerDied","Data":"38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.936699 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc222" event={"ID":"a409f38d-1da9-42e5-94ff-502133f6cee2","Type":"ContainerDied","Data":"ae08bbc669c26ea54a1eccf5d8a7581a7db482300f371666c002c6680cf517cc"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.936717 4768 scope.go:117] "RemoveContainer" containerID="38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.936863 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pc222" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.942321 4768 generic.go:334] "Generic (PLEG): container finished" podID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerID="9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025" exitCode=0 Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.942389 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsm7m" event={"ID":"7e7136ca-949e-49ff-9f79-47e485a039cb","Type":"ContainerDied","Data":"9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.942411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsm7m" event={"ID":"7e7136ca-949e-49ff-9f79-47e485a039cb","Type":"ContainerDied","Data":"ce796e0af413a3d24605aa1cb123933bbacb432e8731ba5e1f6cf64cdcf3e78f"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.942427 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wsm7m" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.944887 4768 generic.go:334] "Generic (PLEG): container finished" podID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerID="dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f" exitCode=0 Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.944944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncxf5" event={"ID":"9497730e-2a05-40b9-a4ee-364b67a9133c","Type":"ContainerDied","Data":"dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.944968 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncxf5" event={"ID":"9497730e-2a05-40b9-a4ee-364b67a9133c","Type":"ContainerDied","Data":"d7b837c2ab3f225a7a1115fa7a02de21d79487055c24cad566eecffda97250cb"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.944987 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncxf5" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.946727 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" event={"ID":"949f4cbc-e86f-4f30-bac7-d31c24169e4e","Type":"ContainerStarted","Data":"35fdbc8e1930ee2fbd73469b1d25d2c94fe40932320535f07a25c4d1702ccdeb"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.946755 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" event={"ID":"949f4cbc-e86f-4f30-bac7-d31c24169e4e","Type":"ContainerStarted","Data":"b475c1b6553481f5fd1f38f30d37118061e7475a3f005525f9caa92580fc0f78"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.947460 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.948945 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.949841 4768 generic.go:334] "Generic (PLEG): container finished" podID="3529a765-a06e-42c3-9a16-959ca7662469" containerID="77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05" exitCode=0 Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.949905 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg2l6" event={"ID":"3529a765-a06e-42c3-9a16-959ca7662469","Type":"ContainerDied","Data":"77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.949924 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg2l6" 
event={"ID":"3529a765-a06e-42c3-9a16-959ca7662469","Type":"ContainerDied","Data":"a89dda73380302fda39de9b4c0ce4be9cc46329aed64bcbc91e1c222bfee5caf"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.950059 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg2l6" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.952255 4768 generic.go:334] "Generic (PLEG): container finished" podID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerID="1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638" exitCode=0 Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.952279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" event={"ID":"29f8bd1f-9a14-4725-a333-ee7509778b5d","Type":"ContainerDied","Data":"1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.952299 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" event={"ID":"29f8bd1f-9a14-4725-a333-ee7509778b5d","Type":"ContainerDied","Data":"9ade9ce96b94d478c6acadbc599460dc662a98ebe7f86c585e40427e6b9363ea"} Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.952334 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wgl6" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.957142 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.957166 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n6f7\" (UniqueName: \"kubernetes.io/projected/29f8bd1f-9a14-4725-a333-ee7509778b5d-kube-api-access-8n6f7\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.957176 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9skbx\" (UniqueName: \"kubernetes.io/projected/3529a765-a06e-42c3-9a16-959ca7662469-kube-api-access-9skbx\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.957184 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnlnm\" (UniqueName: \"kubernetes.io/projected/9497730e-2a05-40b9-a4ee-364b67a9133c-kube-api-access-rnlnm\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.957193 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29f8bd1f-9a14-4725-a333-ee7509778b5d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.957201 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9497730e-2a05-40b9-a4ee-364b67a9133c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.957871 4768 scope.go:117] "RemoveContainer" containerID="4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.977066 
4768 scope.go:117] "RemoveContainer" containerID="5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.982385 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3529a765-a06e-42c3-9a16-959ca7662469" (UID: "3529a765-a06e-42c3-9a16-959ca7662469"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:42:28 crc kubenswrapper[4768]: I0217 13:42:28.983905 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-grp2v" podStartSLOduration=1.983889319 podStartE2EDuration="1.983889319s" podCreationTimestamp="2026-02-17 13:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:42:28.981017908 +0000 UTC m=+368.260404350" watchObservedRunningTime="2026-02-17 13:42:28.983889319 +0000 UTC m=+368.263275761" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.001331 4768 scope.go:117] "RemoveContainer" containerID="38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.003875 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5\": container with ID starting with 38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5 not found: ID does not exist" containerID="38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.003949 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5"} err="failed to get container status \"38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5\": rpc error: code = NotFound desc = could not find container \"38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5\": container with ID starting with 38cea3e6dea32bbb82bf277a79c52a6f19ef1e1488aff139fc90b871b156d1e5 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.003981 4768 scope.go:117] "RemoveContainer" containerID="4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.004848 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7\": container with ID starting with 4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7 not found: ID does not exist" containerID="4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.004904 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7"} err="failed to get container status \"4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7\": rpc error: code = NotFound desc = could not find container \"4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7\": container with ID starting with 4ed86827d60fc6573a53c0e8439f2fc255f866b9306ecc21f22ad7f8a4ee90f7 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.004927 4768 scope.go:117] "RemoveContainer" containerID="5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.005252 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960\": container with ID starting with 5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960 not found: ID does not exist" containerID="5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.005339 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960"} err="failed to get container status \"5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960\": rpc error: code = NotFound desc = could not find container \"5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960\": container with ID starting with 5cae6f7ee03b0057e0f8d790de9aef10c4daa71bfdb30e3bdad31b6102b36960 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.005367 4768 scope.go:117] "RemoveContainer" containerID="9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.031777 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pc222"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.039767 4768 scope.go:117] "RemoveContainer" containerID="73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.053186 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pc222"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.058037 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wgl6"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.061872 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-5wgl6"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.065737 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncxf5"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.067360 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529a765-a06e-42c3-9a16-959ca7662469-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.067977 4768 scope.go:117] "RemoveContainer" containerID="6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.069280 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncxf5"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.072422 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsm7m"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.076199 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wsm7m"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.082872 4768 scope.go:117] "RemoveContainer" containerID="9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.083311 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025\": container with ID starting with 9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025 not found: ID does not exist" containerID="9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.083343 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025"} err="failed to get container status \"9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025\": rpc error: code = NotFound desc = could not find container \"9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025\": container with ID starting with 9278fefa4c49699021989ee3cb743044d055facb0872caf9a9d46dffdd91d025 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.083401 4768 scope.go:117] "RemoveContainer" containerID="73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.083794 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634\": container with ID starting with 73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634 not found: ID does not exist" containerID="73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.083848 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634"} err="failed to get container status \"73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634\": rpc error: code = NotFound desc = could not find container \"73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634\": container with ID starting with 73e337e7f7e17e46af93898770869505955501061a0c1e3fa57eaf7f6ec4d634 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.083866 4768 scope.go:117] "RemoveContainer" containerID="6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.084347 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb\": container with ID starting with 6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb not found: ID does not exist" containerID="6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.084402 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb"} err="failed to get container status \"6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb\": rpc error: code = NotFound desc = could not find container \"6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb\": container with ID starting with 6144258bc968b02299b9af597df580d71e626025bd538c8f7c0eb55a6aee85bb not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.084421 4768 scope.go:117] "RemoveContainer" containerID="dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.099133 4768 scope.go:117] "RemoveContainer" containerID="9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.120178 4768 scope.go:117] "RemoveContainer" containerID="80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.134548 4768 scope.go:117] "RemoveContainer" containerID="dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.134864 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f\": container with ID starting with 
dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f not found: ID does not exist" containerID="dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.134902 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f"} err="failed to get container status \"dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f\": rpc error: code = NotFound desc = could not find container \"dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f\": container with ID starting with dcb86514e3bef6771df47f752ee3adfae73ed35b177b35ee43a6e3ef35c0672f not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.134930 4768 scope.go:117] "RemoveContainer" containerID="9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.135234 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4\": container with ID starting with 9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4 not found: ID does not exist" containerID="9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.135261 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4"} err="failed to get container status \"9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4\": rpc error: code = NotFound desc = could not find container \"9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4\": container with ID starting with 9468aeb0713155aeea4c7bd0be2a7ec89a9fb482cef3b0e24b2809af75e71ba4 not found: ID does not 
exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.135280 4768 scope.go:117] "RemoveContainer" containerID="80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.135541 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d\": container with ID starting with 80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d not found: ID does not exist" containerID="80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.135564 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d"} err="failed to get container status \"80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d\": rpc error: code = NotFound desc = could not find container \"80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d\": container with ID starting with 80749b682f01b18e73b2c01731acd65c9430e9eae428240679ac432771b2a66d not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.135578 4768 scope.go:117] "RemoveContainer" containerID="77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.156448 4768 scope.go:117] "RemoveContainer" containerID="7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.173207 4768 scope.go:117] "RemoveContainer" containerID="544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.185233 4768 scope.go:117] "RemoveContainer" containerID="77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05" Feb 17 13:42:29 crc 
kubenswrapper[4768]: E0217 13:42:29.185636 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05\": container with ID starting with 77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05 not found: ID does not exist" containerID="77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.185660 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05"} err="failed to get container status \"77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05\": rpc error: code = NotFound desc = could not find container \"77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05\": container with ID starting with 77952fd948545e2c1b9533b2e824eea69775681b93fb2de558e55544ac829e05 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.185681 4768 scope.go:117] "RemoveContainer" containerID="7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.186376 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc\": container with ID starting with 7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc not found: ID does not exist" containerID="7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.186414 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc"} err="failed to get container status 
\"7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc\": rpc error: code = NotFound desc = could not find container \"7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc\": container with ID starting with 7629fd7b10653bd059d0bd1befbe43869303faaf4ac25417b177c53c311a55fc not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.186439 4768 scope.go:117] "RemoveContainer" containerID="544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.186800 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e\": container with ID starting with 544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e not found: ID does not exist" containerID="544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.186825 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e"} err="failed to get container status \"544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e\": rpc error: code = NotFound desc = could not find container \"544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e\": container with ID starting with 544862a676076d1f0165f8b3d61bb247d08c75adb18b02e4917db1dd1636c75e not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.186858 4768 scope.go:117] "RemoveContainer" containerID="1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.197752 4768 scope.go:117] "RemoveContainer" containerID="82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.208636 4768 
scope.go:117] "RemoveContainer" containerID="1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.209029 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638\": container with ID starting with 1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638 not found: ID does not exist" containerID="1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.209064 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638"} err="failed to get container status \"1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638\": rpc error: code = NotFound desc = could not find container \"1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638\": container with ID starting with 1a9d7c1759120b58b3c6e021d32cc7b1c46f863e8541ccd867ceb5b19ebd6638 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.209109 4768 scope.go:117] "RemoveContainer" containerID="82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.209438 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523\": container with ID starting with 82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523 not found: ID does not exist" containerID="82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.209474 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523"} err="failed to get container status \"82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523\": rpc error: code = NotFound desc = could not find container \"82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523\": container with ID starting with 82c76dad84961fd21b2badeef4da9697bb097acb8b567939d32878f86ca80523 not found: ID does not exist" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.283667 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fg2l6"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.285783 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fg2l6"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.547976 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" path="/var/lib/kubelet/pods/29f8bd1f-9a14-4725-a333-ee7509778b5d/volumes" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.548471 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3529a765-a06e-42c3-9a16-959ca7662469" path="/var/lib/kubelet/pods/3529a765-a06e-42c3-9a16-959ca7662469/volumes" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.549040 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" path="/var/lib/kubelet/pods/7e7136ca-949e-49ff-9f79-47e485a039cb/volumes" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.550172 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" path="/var/lib/kubelet/pods/9497730e-2a05-40b9-a4ee-364b67a9133c/volumes" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.550898 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" 
path="/var/lib/kubelet/pods/a409f38d-1da9-42e5-94ff-502133f6cee2/volumes" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.898905 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-288nc"] Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.899697 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.899747 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.899814 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.899822 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.899878 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.899888 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.899898 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903079 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903123 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903131 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903139 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903145 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903156 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903161 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903168 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903174 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903183 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903188 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903198 4768 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903203 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903238 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903244 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="extract-utilities" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903467 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903528 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903560 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903613 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: E0217 13:42:29.903713 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.903732 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="extract-content" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904083 4768 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904115 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7136ca-949e-49ff-9f79-47e485a039cb" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904127 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a409f38d-1da9-42e5-94ff-502133f6cee2" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904135 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3529a765-a06e-42c3-9a16-959ca7662469" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904142 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9497730e-2a05-40b9-a4ee-364b67a9133c" containerName="registry-server" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904315 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f8bd1f-9a14-4725-a333-ee7509778b5d" containerName="marketplace-operator" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904789 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-288nc"] Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.904871 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:29 crc kubenswrapper[4768]: I0217 13:42:29.906916 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.079821 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgwn\" (UniqueName: \"kubernetes.io/projected/317cb29b-f26f-4bed-a923-9fe5e7d15391-kube-api-access-fcgwn\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.079892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317cb29b-f26f-4bed-a923-9fe5e7d15391-catalog-content\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.079972 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317cb29b-f26f-4bed-a923-9fe5e7d15391-utilities\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.086658 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hl2m8"] Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.091237 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.094283 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.108669 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hl2m8"] Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.181223 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcgwn\" (UniqueName: \"kubernetes.io/projected/317cb29b-f26f-4bed-a923-9fe5e7d15391-kube-api-access-fcgwn\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.181270 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317cb29b-f26f-4bed-a923-9fe5e7d15391-catalog-content\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.181300 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317cb29b-f26f-4bed-a923-9fe5e7d15391-utilities\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.181776 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317cb29b-f26f-4bed-a923-9fe5e7d15391-catalog-content\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " 
pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.181798 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317cb29b-f26f-4bed-a923-9fe5e7d15391-utilities\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.201750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcgwn\" (UniqueName: \"kubernetes.io/projected/317cb29b-f26f-4bed-a923-9fe5e7d15391-kube-api-access-fcgwn\") pod \"redhat-marketplace-288nc\" (UID: \"317cb29b-f26f-4bed-a923-9fe5e7d15391\") " pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.230463 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.282604 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27vr\" (UniqueName: \"kubernetes.io/projected/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-kube-api-access-f27vr\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.282908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-utilities\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.282936 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-catalog-content\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.384491 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f27vr\" (UniqueName: \"kubernetes.io/projected/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-kube-api-access-f27vr\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.384541 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-utilities\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.384568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-catalog-content\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.385178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-utilities\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.385189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-catalog-content\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.387409 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-288nc"] Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.401966 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27vr\" (UniqueName: \"kubernetes.io/projected/8a1604e7-c6f5-498e-ac94-a9e888e3e6b3-kube-api-access-f27vr\") pod \"certified-operators-hl2m8\" (UID: \"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3\") " pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.409561 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.576880 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hl2m8"] Feb 17 13:42:30 crc kubenswrapper[4768]: W0217 13:42:30.617638 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a1604e7_c6f5_498e_ac94_a9e888e3e6b3.slice/crio-c4e4e43c2c2df3a5f950e062a95d9262ea4fc977c57beed5840b1ce8d812ce1d WatchSource:0}: Error finding container c4e4e43c2c2df3a5f950e062a95d9262ea4fc977c57beed5840b1ce8d812ce1d: Status 404 returned error can't find the container with id c4e4e43c2c2df3a5f950e062a95d9262ea4fc977c57beed5840b1ce8d812ce1d Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.974215 4768 generic.go:334] "Generic (PLEG): container finished" podID="317cb29b-f26f-4bed-a923-9fe5e7d15391" containerID="072dba42a5bebcfc74a082c77cca0222e7ad64d559cedc0817986d7cec48a3f9" exitCode=0 Feb 17 13:42:30 crc 
kubenswrapper[4768]: I0217 13:42:30.974283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288nc" event={"ID":"317cb29b-f26f-4bed-a923-9fe5e7d15391","Type":"ContainerDied","Data":"072dba42a5bebcfc74a082c77cca0222e7ad64d559cedc0817986d7cec48a3f9"} Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.974309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288nc" event={"ID":"317cb29b-f26f-4bed-a923-9fe5e7d15391","Type":"ContainerStarted","Data":"bc73683e7764926ed4f7f62b357696f73097e3358c78119f2073ff474f434b9b"} Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.976058 4768 generic.go:334] "Generic (PLEG): container finished" podID="8a1604e7-c6f5-498e-ac94-a9e888e3e6b3" containerID="ccc3e23390e3eeeeb6ee8d8f3967801c279071c2ae53c893a1958ec8832e56f8" exitCode=0 Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.976116 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl2m8" event={"ID":"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3","Type":"ContainerDied","Data":"ccc3e23390e3eeeeb6ee8d8f3967801c279071c2ae53c893a1958ec8832e56f8"} Feb 17 13:42:30 crc kubenswrapper[4768]: I0217 13:42:30.976167 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl2m8" event={"ID":"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3","Type":"ContainerStarted","Data":"c4e4e43c2c2df3a5f950e062a95d9262ea4fc977c57beed5840b1ce8d812ce1d"} Feb 17 13:42:31 crc kubenswrapper[4768]: I0217 13:42:31.990943 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288nc" event={"ID":"317cb29b-f26f-4bed-a923-9fe5e7d15391","Type":"ContainerStarted","Data":"50b2f2f119e14a4f604e6d73cc1b3a7f113000b0750bc6c6753e5845fc9f7242"} Feb 17 13:42:31 crc kubenswrapper[4768]: I0217 13:42:31.995003 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="8a1604e7-c6f5-498e-ac94-a9e888e3e6b3" containerID="64db8f9049e37edffb6dbb9a9c67cef539bcc9157794f21a6f0f674fea38c359" exitCode=0 Feb 17 13:42:31 crc kubenswrapper[4768]: I0217 13:42:31.995037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl2m8" event={"ID":"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3","Type":"ContainerDied","Data":"64db8f9049e37edffb6dbb9a9c67cef539bcc9157794f21a6f0f674fea38c359"} Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.295197 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fppgg"] Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.297659 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.301401 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fppgg"] Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.301503 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.415025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-utilities\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.415062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzr5m\" (UniqueName: \"kubernetes.io/projected/3a639663-ee39-4aa9-874c-c14cff7d6223-kube-api-access-gzr5m\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 
13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.415175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-catalog-content\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.488620 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wjsd2"] Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.490112 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.492609 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.496374 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsd2"] Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.516066 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-catalog-content\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.516297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-utilities\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.516364 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gzr5m\" (UniqueName: \"kubernetes.io/projected/3a639663-ee39-4aa9-874c-c14cff7d6223-kube-api-access-gzr5m\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.516549 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-catalog-content\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.516721 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-utilities\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.536984 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzr5m\" (UniqueName: \"kubernetes.io/projected/3a639663-ee39-4aa9-874c-c14cff7d6223-kube-api-access-gzr5m\") pod \"redhat-operators-fppgg\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.617642 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e01cac9-3463-4a68-be1d-e64867827ad3-catalog-content\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.617709 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frr5b\" (UniqueName: \"kubernetes.io/projected/3e01cac9-3463-4a68-be1d-e64867827ad3-kube-api-access-frr5b\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.617832 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e01cac9-3463-4a68-be1d-e64867827ad3-utilities\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.675609 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.719185 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e01cac9-3463-4a68-be1d-e64867827ad3-catalog-content\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.719285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frr5b\" (UniqueName: \"kubernetes.io/projected/3e01cac9-3463-4a68-be1d-e64867827ad3-kube-api-access-frr5b\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.719354 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e01cac9-3463-4a68-be1d-e64867827ad3-utilities\") pod 
\"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.719674 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e01cac9-3463-4a68-be1d-e64867827ad3-catalog-content\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.719755 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e01cac9-3463-4a68-be1d-e64867827ad3-utilities\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.741618 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frr5b\" (UniqueName: \"kubernetes.io/projected/3e01cac9-3463-4a68-be1d-e64867827ad3-kube-api-access-frr5b\") pod \"community-operators-wjsd2\" (UID: \"3e01cac9-3463-4a68-be1d-e64867827ad3\") " pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:32 crc kubenswrapper[4768]: I0217 13:42:32.816091 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:33 crc kubenswrapper[4768]: I0217 13:42:33.003011 4768 generic.go:334] "Generic (PLEG): container finished" podID="317cb29b-f26f-4bed-a923-9fe5e7d15391" containerID="50b2f2f119e14a4f604e6d73cc1b3a7f113000b0750bc6c6753e5845fc9f7242" exitCode=0 Feb 17 13:42:33 crc kubenswrapper[4768]: I0217 13:42:33.003066 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288nc" event={"ID":"317cb29b-f26f-4bed-a923-9fe5e7d15391","Type":"ContainerDied","Data":"50b2f2f119e14a4f604e6d73cc1b3a7f113000b0750bc6c6753e5845fc9f7242"} Feb 17 13:42:33 crc kubenswrapper[4768]: I0217 13:42:33.006444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl2m8" event={"ID":"8a1604e7-c6f5-498e-ac94-a9e888e3e6b3","Type":"ContainerStarted","Data":"63e0edba7259c70dcceead626bf0acef9d97d390cbd87c3b23b6329c572bf579"} Feb 17 13:42:33 crc kubenswrapper[4768]: I0217 13:42:33.036293 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsd2"] Feb 17 13:42:33 crc kubenswrapper[4768]: W0217 13:42:33.043571 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e01cac9_3463_4a68_be1d_e64867827ad3.slice/crio-453281ec7c4b9b2fe8a9defd2d0de9d15111a7cc44702c83004c70348af68e49 WatchSource:0}: Error finding container 453281ec7c4b9b2fe8a9defd2d0de9d15111a7cc44702c83004c70348af68e49: Status 404 returned error can't find the container with id 453281ec7c4b9b2fe8a9defd2d0de9d15111a7cc44702c83004c70348af68e49 Feb 17 13:42:33 crc kubenswrapper[4768]: I0217 13:42:33.053161 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hl2m8" podStartSLOduration=1.669978563 podStartE2EDuration="3.05312052s" podCreationTimestamp="2026-02-17 13:42:30 +0000 UTC" 
firstStartedPulling="2026-02-17 13:42:30.977868523 +0000 UTC m=+370.257254965" lastFinishedPulling="2026-02-17 13:42:32.36101048 +0000 UTC m=+371.640396922" observedRunningTime="2026-02-17 13:42:33.040728887 +0000 UTC m=+372.320115329" watchObservedRunningTime="2026-02-17 13:42:33.05312052 +0000 UTC m=+372.332506972" Feb 17 13:42:33 crc kubenswrapper[4768]: I0217 13:42:33.079542 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fppgg"] Feb 17 13:42:33 crc kubenswrapper[4768]: W0217 13:42:33.102988 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a639663_ee39_4aa9_874c_c14cff7d6223.slice/crio-5c6af4ee0cb08744ee0177fa377961f649347f5b4662e20710e1bbc63f5a7ec4 WatchSource:0}: Error finding container 5c6af4ee0cb08744ee0177fa377961f649347f5b4662e20710e1bbc63f5a7ec4: Status 404 returned error can't find the container with id 5c6af4ee0cb08744ee0177fa377961f649347f5b4662e20710e1bbc63f5a7ec4 Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.015450 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-288nc" event={"ID":"317cb29b-f26f-4bed-a923-9fe5e7d15391","Type":"ContainerStarted","Data":"c03279952281cd8d682d27390c5f5545ad5d3458991e31bceb58ce7ad9dad9de"} Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.016818 4768 generic.go:334] "Generic (PLEG): container finished" podID="3e01cac9-3463-4a68-be1d-e64867827ad3" containerID="52a42b516afff63902324a4ab2595a87cf13666ef973c415d531ed5236099e1d" exitCode=0 Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.016862 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsd2" event={"ID":"3e01cac9-3463-4a68-be1d-e64867827ad3","Type":"ContainerDied","Data":"52a42b516afff63902324a4ab2595a87cf13666ef973c415d531ed5236099e1d"} Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.016876 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsd2" event={"ID":"3e01cac9-3463-4a68-be1d-e64867827ad3","Type":"ContainerStarted","Data":"453281ec7c4b9b2fe8a9defd2d0de9d15111a7cc44702c83004c70348af68e49"} Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.018443 4768 generic.go:334] "Generic (PLEG): container finished" podID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerID="c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0" exitCode=0 Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.018489 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fppgg" event={"ID":"3a639663-ee39-4aa9-874c-c14cff7d6223","Type":"ContainerDied","Data":"c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0"} Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.018528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fppgg" event={"ID":"3a639663-ee39-4aa9-874c-c14cff7d6223","Type":"ContainerStarted","Data":"5c6af4ee0cb08744ee0177fa377961f649347f5b4662e20710e1bbc63f5a7ec4"} Feb 17 13:42:34 crc kubenswrapper[4768]: I0217 13:42:34.036075 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-288nc" podStartSLOduration=2.536556172 podStartE2EDuration="5.036053541s" podCreationTimestamp="2026-02-17 13:42:29 +0000 UTC" firstStartedPulling="2026-02-17 13:42:30.976081072 +0000 UTC m=+370.255467514" lastFinishedPulling="2026-02-17 13:42:33.475578441 +0000 UTC m=+372.754964883" observedRunningTime="2026-02-17 13:42:34.034442876 +0000 UTC m=+373.313829318" watchObservedRunningTime="2026-02-17 13:42:34.036053541 +0000 UTC m=+373.315439983" Feb 17 13:42:35 crc kubenswrapper[4768]: I0217 13:42:35.034190 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsd2" 
event={"ID":"3e01cac9-3463-4a68-be1d-e64867827ad3","Type":"ContainerStarted","Data":"3582d56e5080c94658e2f62a3e1c9c4833a8cd34d64af002e1adefd79107625a"} Feb 17 13:42:35 crc kubenswrapper[4768]: I0217 13:42:35.036769 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fppgg" event={"ID":"3a639663-ee39-4aa9-874c-c14cff7d6223","Type":"ContainerStarted","Data":"db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57"} Feb 17 13:42:36 crc kubenswrapper[4768]: I0217 13:42:36.067322 4768 generic.go:334] "Generic (PLEG): container finished" podID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerID="db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57" exitCode=0 Feb 17 13:42:36 crc kubenswrapper[4768]: I0217 13:42:36.067411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fppgg" event={"ID":"3a639663-ee39-4aa9-874c-c14cff7d6223","Type":"ContainerDied","Data":"db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57"} Feb 17 13:42:36 crc kubenswrapper[4768]: I0217 13:42:36.071545 4768 generic.go:334] "Generic (PLEG): container finished" podID="3e01cac9-3463-4a68-be1d-e64867827ad3" containerID="3582d56e5080c94658e2f62a3e1c9c4833a8cd34d64af002e1adefd79107625a" exitCode=0 Feb 17 13:42:36 crc kubenswrapper[4768]: I0217 13:42:36.071581 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsd2" event={"ID":"3e01cac9-3463-4a68-be1d-e64867827ad3","Type":"ContainerDied","Data":"3582d56e5080c94658e2f62a3e1c9c4833a8cd34d64af002e1adefd79107625a"} Feb 17 13:42:38 crc kubenswrapper[4768]: I0217 13:42:38.107121 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsd2" event={"ID":"3e01cac9-3463-4a68-be1d-e64867827ad3","Type":"ContainerStarted","Data":"64729845bb031771099cc80ae0007012f7787477bfdddaaa4492caea127355a1"} Feb 17 13:42:38 crc kubenswrapper[4768]: I0217 
13:42:38.112510 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fppgg" event={"ID":"3a639663-ee39-4aa9-874c-c14cff7d6223","Type":"ContainerStarted","Data":"41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1"} Feb 17 13:42:38 crc kubenswrapper[4768]: I0217 13:42:38.127638 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wjsd2" podStartSLOduration=2.970339028 podStartE2EDuration="6.127617829s" podCreationTimestamp="2026-02-17 13:42:32 +0000 UTC" firstStartedPulling="2026-02-17 13:42:34.018287756 +0000 UTC m=+373.297674198" lastFinishedPulling="2026-02-17 13:42:37.175566557 +0000 UTC m=+376.454952999" observedRunningTime="2026-02-17 13:42:38.121769172 +0000 UTC m=+377.401155635" watchObservedRunningTime="2026-02-17 13:42:38.127617829 +0000 UTC m=+377.407004281" Feb 17 13:42:38 crc kubenswrapper[4768]: I0217 13:42:38.144645 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fppgg" podStartSLOduration=3.016116312 podStartE2EDuration="6.144624853s" podCreationTimestamp="2026-02-17 13:42:32 +0000 UTC" firstStartedPulling="2026-02-17 13:42:34.01948531 +0000 UTC m=+373.298871742" lastFinishedPulling="2026-02-17 13:42:37.147993841 +0000 UTC m=+376.427380283" observedRunningTime="2026-02-17 13:42:38.142484352 +0000 UTC m=+377.421870794" watchObservedRunningTime="2026-02-17 13:42:38.144624853 +0000 UTC m=+377.424011295" Feb 17 13:42:40 crc kubenswrapper[4768]: I0217 13:42:40.231227 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:40 crc kubenswrapper[4768]: I0217 13:42:40.231454 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:40 crc kubenswrapper[4768]: I0217 13:42:40.285223 4768 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:40 crc kubenswrapper[4768]: I0217 13:42:40.410368 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:40 crc kubenswrapper[4768]: I0217 13:42:40.410735 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:40 crc kubenswrapper[4768]: I0217 13:42:40.443489 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:41 crc kubenswrapper[4768]: I0217 13:42:41.165766 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hl2m8" Feb 17 13:42:41 crc kubenswrapper[4768]: I0217 13:42:41.169292 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-288nc" Feb 17 13:42:42 crc kubenswrapper[4768]: I0217 13:42:42.010981 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2q82n" Feb 17 13:42:42 crc kubenswrapper[4768]: I0217 13:42:42.109426 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vfpbq"] Feb 17 13:42:42 crc kubenswrapper[4768]: I0217 13:42:42.677201 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:42 crc kubenswrapper[4768]: I0217 13:42:42.678754 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:42 crc kubenswrapper[4768]: I0217 13:42:42.817148 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:42 crc kubenswrapper[4768]: I0217 13:42:42.817243 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:42 crc kubenswrapper[4768]: I0217 13:42:42.860755 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:43 crc kubenswrapper[4768]: I0217 13:42:43.173768 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wjsd2" Feb 17 13:42:43 crc kubenswrapper[4768]: I0217 13:42:43.717159 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fppgg" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="registry-server" probeResult="failure" output=< Feb 17 13:42:43 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 13:42:43 crc kubenswrapper[4768]: > Feb 17 13:42:52 crc kubenswrapper[4768]: I0217 13:42:52.745228 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:52 crc kubenswrapper[4768]: I0217 13:42:52.802344 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.059826 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.060369 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" 
podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.060450 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.062378 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5027bbc0d5015c18de153045ec7b4f54fa804d4c644f283923fd2686e923444b"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.062460 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://5027bbc0d5015c18de153045ec7b4f54fa804d4c644f283923fd2686e923444b" gracePeriod=600 Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.215706 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="5027bbc0d5015c18de153045ec7b4f54fa804d4c644f283923fd2686e923444b" exitCode=0 Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.215819 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"5027bbc0d5015c18de153045ec7b4f54fa804d4c644f283923fd2686e923444b"} Feb 17 13:42:58 crc kubenswrapper[4768]: I0217 13:42:58.215927 4768 scope.go:117] "RemoveContainer" 
containerID="69b662556c0f67ed5b73c21d8dda6de43cd7b0e7c3dc7a57578f3ce94a4be7cd" Feb 17 13:42:59 crc kubenswrapper[4768]: I0217 13:42:59.224431 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"47b6caa1099a99506831f3c5757b6a6214ede83a2d3a9d01c1a4df4a6cd207c8"} Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.158401 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" podUID="9cf79399-929e-43c8-9ceb-06619ef1edee" containerName="registry" containerID="cri-o://729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da" gracePeriod=30 Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.472867 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.625808 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-bound-sa-token\") pod \"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.625896 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-trusted-ca\") pod \"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.625945 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9cf79399-929e-43c8-9ceb-06619ef1edee-installation-pull-secrets\") pod 
\"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.626200 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.626306 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9cf79399-929e-43c8-9ceb-06619ef1edee-ca-trust-extracted\") pod \"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.626352 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-certificates\") pod \"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.626405 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-tls\") pod \"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.626488 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jpcl\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-kube-api-access-7jpcl\") pod \"9cf79399-929e-43c8-9ceb-06619ef1edee\" (UID: \"9cf79399-929e-43c8-9ceb-06619ef1edee\") " Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.628000 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.629757 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.633632 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.633907 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.634213 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-kube-api-access-7jpcl" (OuterVolumeSpecName: "kube-api-access-7jpcl") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "kube-api-access-7jpcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.634609 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf79399-929e-43c8-9ceb-06619ef1edee-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.640787 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.667407 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf79399-929e-43c8-9ceb-06619ef1edee-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9cf79399-929e-43c8-9ceb-06619ef1edee" (UID: "9cf79399-929e-43c8-9ceb-06619ef1edee"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.728441 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jpcl\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-kube-api-access-7jpcl\") on node \"crc\" DevicePath \"\"" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.728502 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.728527 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.728543 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9cf79399-929e-43c8-9ceb-06619ef1edee-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.728556 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9cf79399-929e-43c8-9ceb-06619ef1edee-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.728567 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 13:43:07 crc kubenswrapper[4768]: I0217 13:43:07.728579 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9cf79399-929e-43c8-9ceb-06619ef1edee-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:43:08 crc 
kubenswrapper[4768]: I0217 13:43:08.275782 4768 generic.go:334] "Generic (PLEG): container finished" podID="9cf79399-929e-43c8-9ceb-06619ef1edee" containerID="729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da" exitCode=0 Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.275859 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" event={"ID":"9cf79399-929e-43c8-9ceb-06619ef1edee","Type":"ContainerDied","Data":"729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da"} Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.275910 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.276238 4768 scope.go:117] "RemoveContainer" containerID="729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da" Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.276216 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vfpbq" event={"ID":"9cf79399-929e-43c8-9ceb-06619ef1edee","Type":"ContainerDied","Data":"8b3dda1eb812d3349d4195832ad6fa1ec1992b55af18e2e5c42653dc2d063b6a"} Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.306514 4768 scope.go:117] "RemoveContainer" containerID="729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da" Feb 17 13:43:08 crc kubenswrapper[4768]: E0217 13:43:08.307383 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da\": container with ID starting with 729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da not found: ID does not exist" containerID="729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da" Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.307453 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da"} err="failed to get container status \"729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da\": rpc error: code = NotFound desc = could not find container \"729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da\": container with ID starting with 729407bfd6a8cbbf40d5853d35be386b4cb4d4d836987b5cf5a752d6f135c6da not found: ID does not exist" Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.317524 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vfpbq"] Feb 17 13:43:08 crc kubenswrapper[4768]: I0217 13:43:08.324507 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vfpbq"] Feb 17 13:43:09 crc kubenswrapper[4768]: I0217 13:43:09.540387 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf79399-929e-43c8-9ceb-06619ef1edee" path="/var/lib/kubelet/pods/9cf79399-929e-43c8-9ceb-06619ef1edee/volumes" Feb 17 13:44:58 crc kubenswrapper[4768]: I0217 13:44:58.060555 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:44:58 crc kubenswrapper[4768]: I0217 13:44:58.061418 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.166317 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7"] Feb 17 13:45:00 crc kubenswrapper[4768]: E0217 13:45:00.166814 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf79399-929e-43c8-9ceb-06619ef1edee" containerName="registry" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.166826 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf79399-929e-43c8-9ceb-06619ef1edee" containerName="registry" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.166909 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf79399-929e-43c8-9ceb-06619ef1edee" containerName="registry" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.167306 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.169225 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.169732 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.181880 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7"] Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.273471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31035514-a227-4c3c-b638-baa3165746d6-config-volume\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.273560 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frh8c\" (UniqueName: \"kubernetes.io/projected/31035514-a227-4c3c-b638-baa3165746d6-kube-api-access-frh8c\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.273593 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31035514-a227-4c3c-b638-baa3165746d6-secret-volume\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.374967 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31035514-a227-4c3c-b638-baa3165746d6-config-volume\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.375041 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frh8c\" (UniqueName: \"kubernetes.io/projected/31035514-a227-4c3c-b638-baa3165746d6-kube-api-access-frh8c\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.375087 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31035514-a227-4c3c-b638-baa3165746d6-secret-volume\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.376181 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31035514-a227-4c3c-b638-baa3165746d6-config-volume\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.381777 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31035514-a227-4c3c-b638-baa3165746d6-secret-volume\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.394701 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frh8c\" (UniqueName: \"kubernetes.io/projected/31035514-a227-4c3c-b638-baa3165746d6-kube-api-access-frh8c\") pod \"collect-profiles-29522265-827m7\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.482499 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:00 crc kubenswrapper[4768]: I0217 13:45:00.658167 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7"] Feb 17 13:45:01 crc kubenswrapper[4768]: I0217 13:45:01.188134 4768 generic.go:334] "Generic (PLEG): container finished" podID="31035514-a227-4c3c-b638-baa3165746d6" containerID="e27bc4595eff5b92c47140ca83c86a5b69c343ac2b274894137306cac14203e4" exitCode=0 Feb 17 13:45:01 crc kubenswrapper[4768]: I0217 13:45:01.188249 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" event={"ID":"31035514-a227-4c3c-b638-baa3165746d6","Type":"ContainerDied","Data":"e27bc4595eff5b92c47140ca83c86a5b69c343ac2b274894137306cac14203e4"} Feb 17 13:45:01 crc kubenswrapper[4768]: I0217 13:45:01.188286 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" event={"ID":"31035514-a227-4c3c-b638-baa3165746d6","Type":"ContainerStarted","Data":"a4590d9a3c84f081dd0d3cbb3731e52043c93b7947c1105e359f85de069113ba"} Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.534222 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.706367 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31035514-a227-4c3c-b638-baa3165746d6-secret-volume\") pod \"31035514-a227-4c3c-b638-baa3165746d6\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.706431 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frh8c\" (UniqueName: \"kubernetes.io/projected/31035514-a227-4c3c-b638-baa3165746d6-kube-api-access-frh8c\") pod \"31035514-a227-4c3c-b638-baa3165746d6\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.706526 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31035514-a227-4c3c-b638-baa3165746d6-config-volume\") pod \"31035514-a227-4c3c-b638-baa3165746d6\" (UID: \"31035514-a227-4c3c-b638-baa3165746d6\") " Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.707716 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31035514-a227-4c3c-b638-baa3165746d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "31035514-a227-4c3c-b638-baa3165746d6" (UID: "31035514-a227-4c3c-b638-baa3165746d6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.712731 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31035514-a227-4c3c-b638-baa3165746d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31035514-a227-4c3c-b638-baa3165746d6" (UID: "31035514-a227-4c3c-b638-baa3165746d6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.712979 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31035514-a227-4c3c-b638-baa3165746d6-kube-api-access-frh8c" (OuterVolumeSpecName: "kube-api-access-frh8c") pod "31035514-a227-4c3c-b638-baa3165746d6" (UID: "31035514-a227-4c3c-b638-baa3165746d6"). InnerVolumeSpecName "kube-api-access-frh8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.808536 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frh8c\" (UniqueName: \"kubernetes.io/projected/31035514-a227-4c3c-b638-baa3165746d6-kube-api-access-frh8c\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.808591 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31035514-a227-4c3c-b638-baa3165746d6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:02 crc kubenswrapper[4768]: I0217 13:45:02.808601 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31035514-a227-4c3c-b638-baa3165746d6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 13:45:03 crc kubenswrapper[4768]: I0217 13:45:03.199693 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" event={"ID":"31035514-a227-4c3c-b638-baa3165746d6","Type":"ContainerDied","Data":"a4590d9a3c84f081dd0d3cbb3731e52043c93b7947c1105e359f85de069113ba"} Feb 17 13:45:03 crc kubenswrapper[4768]: I0217 13:45:03.200008 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4590d9a3c84f081dd0d3cbb3731e52043c93b7947c1105e359f85de069113ba" Feb 17 13:45:03 crc kubenswrapper[4768]: I0217 13:45:03.199751 4768 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7" Feb 17 13:45:28 crc kubenswrapper[4768]: I0217 13:45:28.060069 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:45:28 crc kubenswrapper[4768]: I0217 13:45:28.060864 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.060177 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.060909 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.060982 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.061888 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"47b6caa1099a99506831f3c5757b6a6214ede83a2d3a9d01c1a4df4a6cd207c8"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.061993 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://47b6caa1099a99506831f3c5757b6a6214ede83a2d3a9d01c1a4df4a6cd207c8" gracePeriod=600 Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.568725 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="47b6caa1099a99506831f3c5757b6a6214ede83a2d3a9d01c1a4df4a6cd207c8" exitCode=0 Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.568850 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"47b6caa1099a99506831f3c5757b6a6214ede83a2d3a9d01c1a4df4a6cd207c8"} Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.569019 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"261ea7265dca6d9f9150a1c46ec950cce5894a7910bfec8d9ee8e08fac1f7c8f"} Feb 17 13:45:58 crc kubenswrapper[4768]: I0217 13:45:58.569049 4768 scope.go:117] "RemoveContainer" containerID="5027bbc0d5015c18de153045ec7b4f54fa804d4c644f283923fd2686e923444b" Feb 17 13:46:21 crc kubenswrapper[4768]: I0217 13:46:21.796937 4768 scope.go:117] "RemoveContainer" containerID="47ab018dc21478799cd53fdc410f689574630af9510f05e80a5bf5673d7b24ab" Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 
13:47:37.971956 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn"] Feb 17 13:47:37 crc kubenswrapper[4768]: E0217 13:47:37.972968 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31035514-a227-4c3c-b638-baa3165746d6" containerName="collect-profiles" Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 13:47:37.972993 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="31035514-a227-4c3c-b638-baa3165746d6" containerName="collect-profiles" Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 13:47:37.973211 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="31035514-a227-4c3c-b638-baa3165746d6" containerName="collect-profiles" Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 13:47:37.973782 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 13:47:37.978511 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn"] Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 13:47:37.978581 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 13:47:37.978612 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lkdwl" Feb 17 13:47:37 crc kubenswrapper[4768]: I0217 13:47:37.978736 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.005990 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8nwpk"] Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.006704 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.011077 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7hbh8" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.011171 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ktxxp"] Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.033527 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8nwpk"] Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.033563 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ktxxp"] Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.033640 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ktxxp" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.041835 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xvzd2" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.063766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flr9\" (UniqueName: \"kubernetes.io/projected/ddf2aeae-0541-4180-883f-a7bdfeb65a57-kube-api-access-7flr9\") pod \"cert-manager-cainjector-cf98fcc89-g4sjn\" (UID: \"ddf2aeae-0541-4180-883f-a7bdfeb65a57\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.063829 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnr8x\" (UniqueName: \"kubernetes.io/projected/390428c9-7c97-428f-b609-39f72ff5e558-kube-api-access-xnr8x\") pod \"cert-manager-858654f9db-ktxxp\" (UID: \"390428c9-7c97-428f-b609-39f72ff5e558\") " 
pod="cert-manager/cert-manager-858654f9db-ktxxp" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.063857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnkp6\" (UniqueName: \"kubernetes.io/projected/c39c013f-68bd-4b7b-9582-2cecc55854a5-kube-api-access-hnkp6\") pod \"cert-manager-webhook-687f57d79b-8nwpk\" (UID: \"c39c013f-68bd-4b7b-9582-2cecc55854a5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.164759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnr8x\" (UniqueName: \"kubernetes.io/projected/390428c9-7c97-428f-b609-39f72ff5e558-kube-api-access-xnr8x\") pod \"cert-manager-858654f9db-ktxxp\" (UID: \"390428c9-7c97-428f-b609-39f72ff5e558\") " pod="cert-manager/cert-manager-858654f9db-ktxxp" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.164811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnkp6\" (UniqueName: \"kubernetes.io/projected/c39c013f-68bd-4b7b-9582-2cecc55854a5-kube-api-access-hnkp6\") pod \"cert-manager-webhook-687f57d79b-8nwpk\" (UID: \"c39c013f-68bd-4b7b-9582-2cecc55854a5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.164856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7flr9\" (UniqueName: \"kubernetes.io/projected/ddf2aeae-0541-4180-883f-a7bdfeb65a57-kube-api-access-7flr9\") pod \"cert-manager-cainjector-cf98fcc89-g4sjn\" (UID: \"ddf2aeae-0541-4180-883f-a7bdfeb65a57\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.186879 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnr8x\" (UniqueName: 
\"kubernetes.io/projected/390428c9-7c97-428f-b609-39f72ff5e558-kube-api-access-xnr8x\") pod \"cert-manager-858654f9db-ktxxp\" (UID: \"390428c9-7c97-428f-b609-39f72ff5e558\") " pod="cert-manager/cert-manager-858654f9db-ktxxp" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.186989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnkp6\" (UniqueName: \"kubernetes.io/projected/c39c013f-68bd-4b7b-9582-2cecc55854a5-kube-api-access-hnkp6\") pod \"cert-manager-webhook-687f57d79b-8nwpk\" (UID: \"c39c013f-68bd-4b7b-9582-2cecc55854a5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.187581 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7flr9\" (UniqueName: \"kubernetes.io/projected/ddf2aeae-0541-4180-883f-a7bdfeb65a57-kube-api-access-7flr9\") pod \"cert-manager-cainjector-cf98fcc89-g4sjn\" (UID: \"ddf2aeae-0541-4180-883f-a7bdfeb65a57\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.290601 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.320904 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.352350 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ktxxp" Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.577349 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn"] Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.589679 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.805383 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8nwpk"] Feb 17 13:47:38 crc kubenswrapper[4768]: W0217 13:47:38.809675 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39c013f_68bd_4b7b_9582_2cecc55854a5.slice/crio-5dfce6573ccc45a585c3f7fa5d14b3cf302b756554edb31ac5b6d4c190f5a608 WatchSource:0}: Error finding container 5dfce6573ccc45a585c3f7fa5d14b3cf302b756554edb31ac5b6d4c190f5a608: Status 404 returned error can't find the container with id 5dfce6573ccc45a585c3f7fa5d14b3cf302b756554edb31ac5b6d4c190f5a608 Feb 17 13:47:38 crc kubenswrapper[4768]: I0217 13:47:38.824876 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ktxxp"] Feb 17 13:47:38 crc kubenswrapper[4768]: W0217 13:47:38.831191 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod390428c9_7c97_428f_b609_39f72ff5e558.slice/crio-71772e0623c54aab2c5fcf228ce8a5c8ec9de01247e7936b336e96f90febaf5f WatchSource:0}: Error finding container 71772e0623c54aab2c5fcf228ce8a5c8ec9de01247e7936b336e96f90febaf5f: Status 404 returned error can't find the container with id 71772e0623c54aab2c5fcf228ce8a5c8ec9de01247e7936b336e96f90febaf5f Feb 17 13:47:39 crc kubenswrapper[4768]: I0217 13:47:39.245187 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" event={"ID":"ddf2aeae-0541-4180-883f-a7bdfeb65a57","Type":"ContainerStarted","Data":"61ff0cb483cdf993e08c89267cab50f675ec4d73078863f37024acd676acb47f"} Feb 17 13:47:39 crc kubenswrapper[4768]: I0217 13:47:39.246253 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ktxxp" event={"ID":"390428c9-7c97-428f-b609-39f72ff5e558","Type":"ContainerStarted","Data":"71772e0623c54aab2c5fcf228ce8a5c8ec9de01247e7936b336e96f90febaf5f"} Feb 17 13:47:39 crc kubenswrapper[4768]: I0217 13:47:39.247277 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" event={"ID":"c39c013f-68bd-4b7b-9582-2cecc55854a5","Type":"ContainerStarted","Data":"5dfce6573ccc45a585c3f7fa5d14b3cf302b756554edb31ac5b6d4c190f5a608"} Feb 17 13:47:45 crc kubenswrapper[4768]: I0217 13:47:45.291691 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ktxxp" event={"ID":"390428c9-7c97-428f-b609-39f72ff5e558","Type":"ContainerStarted","Data":"fb734ce0200b36d7ecd89d22de6a10b7551fe16aa9cd33ae2000fa4f2de5eeb1"} Feb 17 13:47:45 crc kubenswrapper[4768]: I0217 13:47:45.294577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" event={"ID":"c39c013f-68bd-4b7b-9582-2cecc55854a5","Type":"ContainerStarted","Data":"c51191043048b2d173f06097780249b5f37981c8d93f56000f9f748bb84bfac9"} Feb 17 13:47:45 crc kubenswrapper[4768]: I0217 13:47:45.294674 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" Feb 17 13:47:45 crc kubenswrapper[4768]: I0217 13:47:45.296736 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" 
event={"ID":"ddf2aeae-0541-4180-883f-a7bdfeb65a57","Type":"ContainerStarted","Data":"b96c2677deb37f53a587dc4b8410db6ec1b74f1bf96e1cdf194d145cda9927a1"} Feb 17 13:47:45 crc kubenswrapper[4768]: I0217 13:47:45.314633 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-ktxxp" podStartSLOduration=2.9657545069999998 podStartE2EDuration="8.314617666s" podCreationTimestamp="2026-02-17 13:47:37 +0000 UTC" firstStartedPulling="2026-02-17 13:47:38.833117936 +0000 UTC m=+678.112504378" lastFinishedPulling="2026-02-17 13:47:44.181981095 +0000 UTC m=+683.461367537" observedRunningTime="2026-02-17 13:47:45.312325604 +0000 UTC m=+684.591712046" watchObservedRunningTime="2026-02-17 13:47:45.314617666 +0000 UTC m=+684.594004108" Feb 17 13:47:45 crc kubenswrapper[4768]: I0217 13:47:45.350070 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" podStartSLOduration=2.986974134 podStartE2EDuration="8.350040582s" podCreationTimestamp="2026-02-17 13:47:37 +0000 UTC" firstStartedPulling="2026-02-17 13:47:38.811457718 +0000 UTC m=+678.090844160" lastFinishedPulling="2026-02-17 13:47:44.174524166 +0000 UTC m=+683.453910608" observedRunningTime="2026-02-17 13:47:45.33949672 +0000 UTC m=+684.618883202" watchObservedRunningTime="2026-02-17 13:47:45.350040582 +0000 UTC m=+684.629427054" Feb 17 13:47:45 crc kubenswrapper[4768]: I0217 13:47:45.361807 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-g4sjn" podStartSLOduration=2.767586623 podStartE2EDuration="8.361781445s" podCreationTimestamp="2026-02-17 13:47:37 +0000 UTC" firstStartedPulling="2026-02-17 13:47:38.588495942 +0000 UTC m=+677.867882384" lastFinishedPulling="2026-02-17 13:47:44.182690764 +0000 UTC m=+683.462077206" observedRunningTime="2026-02-17 13:47:45.357990335 +0000 UTC m=+684.637376777" 
watchObservedRunningTime="2026-02-17 13:47:45.361781445 +0000 UTC m=+684.641167917" Feb 17 13:47:53 crc kubenswrapper[4768]: I0217 13:47:53.325968 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-8nwpk" Feb 17 13:47:58 crc kubenswrapper[4768]: I0217 13:47:58.059851 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:47:58 crc kubenswrapper[4768]: I0217 13:47:58.059930 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.837264 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5cplg"] Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.838267 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-controller" containerID="cri-o://d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5" gracePeriod=30 Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.838326 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="nbdb" containerID="cri-o://bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e" gracePeriod=30 Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.838400 4768 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="northd" containerID="cri-o://aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2" gracePeriod=30 Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.838448 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4" gracePeriod=30 Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.838486 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-node" containerID="cri-o://88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86" gracePeriod=30 Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.838523 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-acl-logging" containerID="cri-o://2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f" gracePeriod=30 Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.838690 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="sbdb" containerID="cri-o://a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37" gracePeriod=30 Feb 17 13:48:12 crc kubenswrapper[4768]: I0217 13:48:12.878225 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" 
podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" containerID="cri-o://d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe" gracePeriod=30 Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.181593 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/3.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.183773 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovn-acl-logging/0.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.184241 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovn-controller/0.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.184963 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.248551 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qt2lc"] Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.248783 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.248798 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.248808 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-acl-logging" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.248815 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-acl-logging" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.248828 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="sbdb" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.248837 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="sbdb" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.248848 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.248856 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.248864 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.248873 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.248886 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kubecfg-setup" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.248894 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kubecfg-setup" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.251259 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251330 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.251357 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-node" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251371 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-node" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.251386 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="northd" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251399 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="northd" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.251416 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="nbdb" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251429 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="nbdb" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.251453 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251466 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251741 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251779 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251796 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251809 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251827 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="sbdb" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251846 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovn-acl-logging" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251862 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251879 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="northd" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251902 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="nbdb" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.251919 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="kube-rbac-proxy-node" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.252086 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.252124 4768 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.252298 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.252479 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.252502 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.252715 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" containerName="ovnkube-controller" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.255290 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.260997 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-ovnkube-config\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261080 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-systemd\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261231 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-var-lib-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261316 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-node-log\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261374 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261480 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-ovn\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261621 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-cni-netd\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261759 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-ovnkube-script-lib\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lw7l\" (UniqueName: \"kubernetes.io/projected/eea3d023-6616-48ab-8208-6e62f936e840-kube-api-access-5lw7l\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.261943 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-cni-bin\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262040 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea3d023-6616-48ab-8208-6e62f936e840-ovn-node-metrics-cert\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-run-netns\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262248 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-slash\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262412 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-etc-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262464 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-systemd-units\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262492 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262521 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-run-ovn-kubernetes\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-kubelet\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262578 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-env-overrides\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.262611 
4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-log-socket\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.363199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-config\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.363509 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-var-lib-openvswitch\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.363611 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg6ql\" (UniqueName: \"kubernetes.io/projected/742e6df8-2a68-426e-982c-ef825c6efca3-kube-api-access-tg6ql\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.363716 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/742e6df8-2a68-426e-982c-ef825c6efca3-ovn-node-metrics-cert\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.363808 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-netd\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.363881 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-script-lib\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.363962 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-ovn\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364037 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-bin\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364163 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-etc-openvswitch\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364271 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-env-overrides\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364373 
4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-node-log\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364450 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-slash\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364521 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-ovn-kubernetes\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364607 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364696 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-log-socket\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364777 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-openvswitch\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364858 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-kubelet\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364936 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-systemd-units\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365020 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-netns\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365125 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-systemd\") pod \"742e6df8-2a68-426e-982c-ef825c6efca3\" (UID: \"742e6df8-2a68-426e-982c-ef825c6efca3\") " Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-etc-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc 
kubenswrapper[4768]: I0217 13:48:13.363621 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364183 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364253 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364254 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364615 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364895 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-systemd-units\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365801 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-kubelet\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-env-overrides\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365876 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-log-socket\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-ovnkube-config\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365969 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-systemd\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365992 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-var-lib-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366037 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-node-log\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366073 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366130 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-ovn\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366149 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-cni-netd\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366170 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-ovnkube-script-lib\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366202 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lw7l\" (UniqueName: \"kubernetes.io/projected/eea3d023-6616-48ab-8208-6e62f936e840-kube-api-access-5lw7l\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366232 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-cni-bin\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366273 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea3d023-6616-48ab-8208-6e62f936e840-ovn-node-metrics-cert\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366301 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-run-netns\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" 
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-slash\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366394 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366410 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366423 4768 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366435 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366449 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366459 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/742e6df8-2a68-426e-982c-ef825c6efca3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 
crc kubenswrapper[4768]: I0217 13:48:13.366503 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-slash\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364921 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364940 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364968 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.364988 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-node-log" (OuterVolumeSpecName: "node-log") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-slash" (OuterVolumeSpecName: "host-slash") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365027 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365057 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365078 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-log-socket" (OuterVolumeSpecName: "log-socket") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365124 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365158 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365219 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366612 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-systemd-units\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365507 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366642 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-run-ovn-kubernetes\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.365448 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-etc-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.366671 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-kubelet\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 
13:48:13.367246 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-env-overrides\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-cni-netd\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367393 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-ovn\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367454 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-run-netns\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-var-lib-openvswitch\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367734 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-node-log\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367902 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-run-systemd\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.367496 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-log-socket\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.368186 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea3d023-6616-48ab-8208-6e62f936e840-host-cni-bin\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.368211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-ovnkube-config\") pod 
\"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.368450 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eea3d023-6616-48ab-8208-6e62f936e840-ovnkube-script-lib\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.377838 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742e6df8-2a68-426e-982c-ef825c6efca3-kube-api-access-tg6ql" (OuterVolumeSpecName: "kube-api-access-tg6ql") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "kube-api-access-tg6ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.378655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea3d023-6616-48ab-8208-6e62f936e840-ovn-node-metrics-cert\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.378865 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742e6df8-2a68-426e-982c-ef825c6efca3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.387736 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lw7l\" (UniqueName: \"kubernetes.io/projected/eea3d023-6616-48ab-8208-6e62f936e840-kube-api-access-5lw7l\") pod \"ovnkube-node-qt2lc\" (UID: \"eea3d023-6616-48ab-8208-6e62f936e840\") " pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.392970 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "742e6df8-2a68-426e-982c-ef825c6efca3" (UID: "742e6df8-2a68-426e-982c-ef825c6efca3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467452 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/2.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467476 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg6ql\" (UniqueName: \"kubernetes.io/projected/742e6df8-2a68-426e-982c-ef825c6efca3-kube-api-access-tg6ql\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467540 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/742e6df8-2a68-426e-982c-ef825c6efca3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467572 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467597 4768 
reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467622 4768 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467645 4768 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467668 4768 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467694 4768 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467721 4768 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467746 4768 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467770 4768 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467799 4768 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467823 4768 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467848 4768 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/742e6df8-2a68-426e-982c-ef825c6efca3-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467944 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/1.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.467986 4768 generic.go:334] "Generic (PLEG): container finished" podID="e044bf1f-26b2-4a39-86e6-0440eff3eaa9" containerID="f0eabe9e6b5551e88ed34f7b32f5573dd3d736e0c52761e08b1a6b74957522ef" exitCode=2 Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.468053 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerDied","Data":"f0eabe9e6b5551e88ed34f7b32f5573dd3d736e0c52761e08b1a6b74957522ef"} Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.468091 4768 scope.go:117] "RemoveContainer" containerID="8710b3ee2ff8d7aa849b863cfe8b99fd97e02f57ece68e528c7c23994608bedd" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.468993 4768 scope.go:117] "RemoveContainer" 
containerID="f0eabe9e6b5551e88ed34f7b32f5573dd3d736e0c52761e08b1a6b74957522ef" Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.469484 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jjjqj_openshift-multus(e044bf1f-26b2-4a39-86e6-0440eff3eaa9)\"" pod="openshift-multus/multus-jjjqj" podUID="e044bf1f-26b2-4a39-86e6-0440eff3eaa9" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.472756 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovnkube-controller/3.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.475888 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovn-acl-logging/0.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.476804 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5cplg_742e6df8-2a68-426e-982c-ef825c6efca3/ovn-controller/0.log" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477418 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe" exitCode=0 Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477469 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37" exitCode=0 Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477499 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e" exitCode=0 Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 
13:48:13.477520 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"} Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"} Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477622 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"} Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"} Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477523 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2" exitCode=0 Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477683 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4" exitCode=0 Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477697 4768 
generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86" exitCode=0
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477711 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f" exitCode=143
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477729 4768 generic.go:334] "Generic (PLEG): container finished" podID="742e6df8-2a68-426e-982c-ef825c6efca3" containerID="d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5" exitCode=143
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477882 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477901 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477916 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477923 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477930 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477941 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477948 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477954 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477961 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477967 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477974 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477984 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.477995 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478004 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478010 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478017 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478023 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478030 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478036 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478042 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478048 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478055 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478064 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478075 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478082 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478089 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478095 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478121 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478132 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478138 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478145 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478151 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478157 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478167 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5cplg" event={"ID":"742e6df8-2a68-426e-982c-ef825c6efca3","Type":"ContainerDied","Data":"4233bdbe1d37ecf7f91ff245413808d8b89fdabf69050a2c99a687098eff401f"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478181 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478189 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478195 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478202 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478209 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478215 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478222 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478231 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478239 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.478245 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"}
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.500313 4768 scope.go:117] "RemoveContainer" containerID="d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.521273 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.533447 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5cplg"]
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.542817 4768 scope.go:117] "RemoveContainer" containerID="a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.542987 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5cplg"]
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.559152 4768 scope.go:117] "RemoveContainer" containerID="bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.571969 4768 scope.go:117] "RemoveContainer" containerID="aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.575256 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.605573 4768 scope.go:117] "RemoveContainer" containerID="2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.623003 4768 scope.go:117] "RemoveContainer" containerID="88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.648331 4768 scope.go:117] "RemoveContainer" containerID="2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.661254 4768 scope.go:117] "RemoveContainer" containerID="d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.690917 4768 scope.go:117] "RemoveContainer" containerID="6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.717591 4768 scope.go:117] "RemoveContainer" containerID="d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.718780 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": container with ID starting with d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe not found: ID does not exist" containerID="d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.718867 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"} err="failed to get container status \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": rpc error: code = NotFound desc = could not find container \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": container with ID starting with d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.718915 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.719591 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": container with ID starting with 115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9 not found: ID does not exist" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.719651 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"} err="failed to get container status \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": rpc error: code = NotFound desc = could not find container \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": container with ID starting with 115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.719686 4768 scope.go:117] "RemoveContainer" containerID="a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.720228 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": container with ID starting with a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37 not found: ID does not exist" containerID="a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.720311 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"} err="failed to get container status \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": rpc error: code = NotFound desc = could not find container \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": container with ID starting with a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.720385 4768 scope.go:117] "RemoveContainer" containerID="bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.721033 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": container with ID starting with bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e not found: ID does not exist" containerID="bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.721081 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"} err="failed to get container status \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": rpc error: code = NotFound desc = could not find container \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": container with ID starting with bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.721129 4768 scope.go:117] "RemoveContainer" containerID="aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.721591 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": container with ID starting with aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2 not found: ID does not exist" containerID="aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.721636 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"} err="failed to get container status \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": rpc error: code = NotFound desc = could not find container \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": container with ID starting with aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.721655 4768 scope.go:117] "RemoveContainer" containerID="2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.722028 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": container with ID starting with 2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4 not found: ID does not exist" containerID="2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.722063 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"} err="failed to get container status \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": rpc error: code = NotFound desc = could not find container \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": container with ID starting with 2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.722083 4768 scope.go:117] "RemoveContainer" containerID="88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.722571 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": container with ID starting with 88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86 not found: ID does not exist" containerID="88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.722617 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"} err="failed to get container status \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": rpc error: code = NotFound desc = could not find container \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": container with ID starting with 88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.722637 4768 scope.go:117] "RemoveContainer" containerID="2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.722979 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": container with ID starting with 2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f not found: ID does not exist" containerID="2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.723006 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"} err="failed to get container status \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": rpc error: code = NotFound desc = could not find container \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": container with ID starting with 2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.723027 4768 scope.go:117] "RemoveContainer" containerID="d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.723634 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": container with ID starting with d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5 not found: ID does not exist" containerID="d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.723682 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"} err="failed to get container status \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": rpc error: code = NotFound desc = could not find container \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": container with ID starting with d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.723706 4768 scope.go:117] "RemoveContainer" containerID="6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"
Feb 17 13:48:13 crc kubenswrapper[4768]: E0217 13:48:13.724045 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": container with ID starting with 6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c not found: ID does not exist" containerID="6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.724076 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"} err="failed to get container status \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": rpc error: code = NotFound desc = could not find container \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": container with ID starting with 6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.724117 4768 scope.go:117] "RemoveContainer" containerID="d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.724517 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"} err="failed to get container status \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": rpc error: code = NotFound desc = could not find container \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": container with ID starting with d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.724546 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.724866 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"} err="failed to get container status \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": rpc error: code = NotFound desc = could not find container \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": container with ID starting with 115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.724893 4768 scope.go:117] "RemoveContainer" containerID="a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.725202 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"} err="failed to get container status \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": rpc error: code = NotFound desc = could not find container \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": container with ID starting with a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.725227 4768 scope.go:117] "RemoveContainer" containerID="bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.725648 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"} err="failed to get container status \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": rpc error: code = NotFound desc = could not find container \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": container with ID starting with bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.725673 4768 scope.go:117] "RemoveContainer" containerID="aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.726082 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"} err="failed to get container status \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": rpc error: code = NotFound desc = could not find container \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": container with ID starting with aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.726186 4768 scope.go:117] "RemoveContainer" containerID="2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.727434 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"} err="failed to get container status \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": rpc error: code = NotFound desc = could not find container \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": container with ID starting with 2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.727502 4768 scope.go:117] "RemoveContainer" containerID="88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.727888 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"} err="failed to get container status \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": rpc error: code = NotFound desc = could not find container \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": container with ID starting with 88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.727929 4768 scope.go:117] "RemoveContainer" containerID="2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.728272 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"} err="failed to get container status \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": rpc error: code = NotFound desc = could not find container \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": container with ID starting with 2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.728314 4768 scope.go:117] "RemoveContainer" containerID="d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.728633 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"} err="failed to get container status \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": rpc error: code = NotFound desc = could not find container \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": container with ID starting with d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.728665 4768 scope.go:117] "RemoveContainer" containerID="6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.729012 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"} err="failed to get container status \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": rpc error: code = NotFound desc = could not find container \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": container with ID starting with 6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.729045 4768 scope.go:117] "RemoveContainer" containerID="d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.729450 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"} err="failed to get container status \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": rpc error: code = NotFound desc = could not find container \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": container with ID starting with d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.730196 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.730620 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"} err="failed to get container status \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": rpc error: code = NotFound desc = could not find container \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": container with ID starting with 115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.730675 4768 scope.go:117] "RemoveContainer" containerID="a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.730944 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"} err="failed to get container status \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": rpc error: code = NotFound desc = could not find container \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": container with ID starting with a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.730992 4768 scope.go:117] "RemoveContainer" containerID="bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.731360 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"} err="failed to get container status \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": rpc error: code = NotFound desc = could not find container \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": container with ID starting with bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.731391 4768 scope.go:117] "RemoveContainer" containerID="aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.731648 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"} err="failed to get container status \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": rpc error: code = NotFound desc = could not find container \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": container with ID starting with aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.731673 4768 scope.go:117] "RemoveContainer" containerID="2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.731929 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"} err="failed to get container status \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": rpc error: code = NotFound desc = could not find container \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": container with ID starting with 2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.731960 4768 scope.go:117] "RemoveContainer" containerID="88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.732330 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"} err="failed to get container status \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": rpc error: code = NotFound desc = could not find container \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": container with ID starting with 88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86 not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.732356 4768 scope.go:117] "RemoveContainer" containerID="2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.732906 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"} err="failed to get container status \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": rpc error: code = NotFound desc = could not find container \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": container with ID starting with 2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f not found: ID does not exist"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.732933 4768 scope.go:117] "RemoveContainer" containerID="d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"
Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.733245 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"} err="failed to get container status \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": rpc error: code = NotFound desc = could
not find container \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": container with ID starting with d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5 not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.733265 4768 scope.go:117] "RemoveContainer" containerID="6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.733715 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"} err="failed to get container status \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": rpc error: code = NotFound desc = could not find container \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": container with ID starting with 6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.733745 4768 scope.go:117] "RemoveContainer" containerID="d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.734058 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe"} err="failed to get container status \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": rpc error: code = NotFound desc = could not find container \"d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe\": container with ID starting with d61025c117feb912aa62fab47e8cea3a9df5a2d323840571cec1ecd64c4b3fbe not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.734081 4768 scope.go:117] "RemoveContainer" containerID="115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 
13:48:13.734455 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9"} err="failed to get container status \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": rpc error: code = NotFound desc = could not find container \"115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9\": container with ID starting with 115c47b6c343076ece428facfb56ccbf0b490cbfe7eff1470c731d6c304df3b9 not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.734485 4768 scope.go:117] "RemoveContainer" containerID="a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.734849 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37"} err="failed to get container status \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": rpc error: code = NotFound desc = could not find container \"a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37\": container with ID starting with a8e5ca206927b1e2a6968ce42ecdb1441da4c68ad11b65c657a68889ac73af37 not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.734876 4768 scope.go:117] "RemoveContainer" containerID="bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.735522 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e"} err="failed to get container status \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": rpc error: code = NotFound desc = could not find container \"bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e\": container with ID starting with 
bca83097ac9d4b621e9bffa8f2f815a6060db34cb0272281b765aa7705d3ab5e not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.735573 4768 scope.go:117] "RemoveContainer" containerID="aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.735855 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2"} err="failed to get container status \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": rpc error: code = NotFound desc = could not find container \"aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2\": container with ID starting with aaa535fc7f87d47d5bb54797f60b4adcabe1adc1d5d79720acc7a9ba7bb355c2 not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.735890 4768 scope.go:117] "RemoveContainer" containerID="2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.736236 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4"} err="failed to get container status \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": rpc error: code = NotFound desc = could not find container \"2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4\": container with ID starting with 2c29effffff4df037300ce8f32cae5337e3c60634d26fb82fde99ad7994a4fd4 not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.736263 4768 scope.go:117] "RemoveContainer" containerID="88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.736549 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86"} err="failed to get container status \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": rpc error: code = NotFound desc = could not find container \"88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86\": container with ID starting with 88f4574f4bb95ef3e580f5e3e98feb55cd1381df1cc14a3797f040f3a8d92e86 not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.736569 4768 scope.go:117] "RemoveContainer" containerID="2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.736823 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f"} err="failed to get container status \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": rpc error: code = NotFound desc = could not find container \"2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f\": container with ID starting with 2e14dfa7f4c26b9ae731392907658bf0a72b811243137b32320a1be70a334a1f not found: ID does not exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.736845 4768 scope.go:117] "RemoveContainer" containerID="d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.737224 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5"} err="failed to get container status \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": rpc error: code = NotFound desc = could not find container \"d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5\": container with ID starting with d0849644593ce9a9b0ce3191eb62e47c5986c4b20f93313dd9fa721c7879ede5 not found: ID does not 
exist" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.737282 4768 scope.go:117] "RemoveContainer" containerID="6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c" Feb 17 13:48:13 crc kubenswrapper[4768]: I0217 13:48:13.737598 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c"} err="failed to get container status \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": rpc error: code = NotFound desc = could not find container \"6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c\": container with ID starting with 6fcb3dc45db5b2c74f9bdd4c7c453815a392de1f2afe91ebed618327cfe5cb7c not found: ID does not exist" Feb 17 13:48:14 crc kubenswrapper[4768]: I0217 13:48:14.487862 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/2.log" Feb 17 13:48:14 crc kubenswrapper[4768]: I0217 13:48:14.493481 4768 generic.go:334] "Generic (PLEG): container finished" podID="eea3d023-6616-48ab-8208-6e62f936e840" containerID="064f0a7c456c22598edbf1c11bea1c712cb90850cfa781b0f2eb468293742a3a" exitCode=0 Feb 17 13:48:14 crc kubenswrapper[4768]: I0217 13:48:14.493527 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerDied","Data":"064f0a7c456c22598edbf1c11bea1c712cb90850cfa781b0f2eb468293742a3a"} Feb 17 13:48:14 crc kubenswrapper[4768]: I0217 13:48:14.493554 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"2bb08e8112e100f71a7dc41ea84250d14116fd152db0cfafafffc3a62d7a2df8"} Feb 17 13:48:15 crc kubenswrapper[4768]: I0217 13:48:15.500587 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"d5d33648855f10160f7e6cd72529bf436558806f02f5495e5f1d4d1579e77257"} Feb 17 13:48:15 crc kubenswrapper[4768]: I0217 13:48:15.501072 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"269072d511777376abc9a71ad0563c9b2a8782f0ad24ad23e6590e362925fcf5"} Feb 17 13:48:15 crc kubenswrapper[4768]: I0217 13:48:15.501084 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"b74ab341e54310e0e36f88a2de0f6fc9d7b3357e9a34bc9b434191db56cf0fb1"} Feb 17 13:48:15 crc kubenswrapper[4768]: I0217 13:48:15.501093 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"836aba912180fbad7272e2244b621dc71268c6c2e04bfa26b7f12e14515b530a"} Feb 17 13:48:15 crc kubenswrapper[4768]: I0217 13:48:15.501131 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"2fb9ed5609e3975284e18dd914610cd4305c560d79ce1d6e22449a6ae9b72270"} Feb 17 13:48:15 crc kubenswrapper[4768]: I0217 13:48:15.501143 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"3bf7ebf1d8ca445cc2493f56dc55ad8f460026a99742993548d945fad623b9be"} Feb 17 13:48:15 crc kubenswrapper[4768]: I0217 13:48:15.543580 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742e6df8-2a68-426e-982c-ef825c6efca3" 
path="/var/lib/kubelet/pods/742e6df8-2a68-426e-982c-ef825c6efca3/volumes" Feb 17 13:48:18 crc kubenswrapper[4768]: I0217 13:48:18.532079 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"f043719db4c89514a0c694acf60c1a2d94586c98e51fed52a2977be0f579f713"} Feb 17 13:48:20 crc kubenswrapper[4768]: I0217 13:48:20.552141 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" event={"ID":"eea3d023-6616-48ab-8208-6e62f936e840","Type":"ContainerStarted","Data":"ee7795c2eb9f301bee20c00e77103b28d80451a7fd7999569b089b8aecf2179b"} Feb 17 13:48:20 crc kubenswrapper[4768]: I0217 13:48:20.552490 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:20 crc kubenswrapper[4768]: I0217 13:48:20.584388 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" podStartSLOduration=7.584365621 podStartE2EDuration="7.584365621s" podCreationTimestamp="2026-02-17 13:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:48:20.582386699 +0000 UTC m=+719.861773171" watchObservedRunningTime="2026-02-17 13:48:20.584365621 +0000 UTC m=+719.863752073" Feb 17 13:48:20 crc kubenswrapper[4768]: I0217 13:48:20.592051 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:21 crc kubenswrapper[4768]: I0217 13:48:21.560199 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:21 crc kubenswrapper[4768]: I0217 13:48:21.560611 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:21 crc kubenswrapper[4768]: I0217 13:48:21.594798 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:26 crc kubenswrapper[4768]: I0217 13:48:26.534448 4768 scope.go:117] "RemoveContainer" containerID="f0eabe9e6b5551e88ed34f7b32f5573dd3d736e0c52761e08b1a6b74957522ef" Feb 17 13:48:26 crc kubenswrapper[4768]: E0217 13:48:26.535424 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jjjqj_openshift-multus(e044bf1f-26b2-4a39-86e6-0440eff3eaa9)\"" pod="openshift-multus/multus-jjjqj" podUID="e044bf1f-26b2-4a39-86e6-0440eff3eaa9" Feb 17 13:48:28 crc kubenswrapper[4768]: I0217 13:48:28.060215 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:48:28 crc kubenswrapper[4768]: I0217 13:48:28.060348 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.080592 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt"] Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.082282 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.084785 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.093168 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt"] Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.213483 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xslth\" (UniqueName: \"kubernetes.io/projected/4d7e7247-8115-4259-b218-d5d8dceac01d-kube-api-access-xslth\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.213536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.213565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: 
I0217 13:48:38.314321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xslth\" (UniqueName: \"kubernetes.io/projected/4d7e7247-8115-4259-b218-d5d8dceac01d-kube-api-access-xslth\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.314364 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.314383 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.314872 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.314925 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.344607 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xslth\" (UniqueName: \"kubernetes.io/projected/4d7e7247-8115-4259-b218-d5d8dceac01d-kube-api-access-xslth\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.399579 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.437147 4768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(0613ed6f92fcb0c54dfa986ea5b4024fcb6600fe92fcb8aa6ac8f436dc620dd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.437227 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(0613ed6f92fcb0c54dfa986ea5b4024fcb6600fe92fcb8aa6ac8f436dc620dd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.437260 4768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(0613ed6f92fcb0c54dfa986ea5b4024fcb6600fe92fcb8aa6ac8f436dc620dd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.437330 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace(4d7e7247-8115-4259-b218-d5d8dceac01d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace(4d7e7247-8115-4259-b218-d5d8dceac01d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(0613ed6f92fcb0c54dfa986ea5b4024fcb6600fe92fcb8aa6ac8f436dc620dd8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.661363 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: I0217 13:48:38.662283 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.715488 4768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(d77a9589a3dea314000ed2cde50a2c4e8723d9fa5a4fb9174c56749ea4031620): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.715600 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(d77a9589a3dea314000ed2cde50a2c4e8723d9fa5a4fb9174c56749ea4031620): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.715628 4768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(d77a9589a3dea314000ed2cde50a2c4e8723d9fa5a4fb9174c56749ea4031620): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:38 crc kubenswrapper[4768]: E0217 13:48:38.715687 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace(4d7e7247-8115-4259-b218-d5d8dceac01d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace(4d7e7247-8115-4259-b218-d5d8dceac01d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_openshift-marketplace_4d7e7247-8115-4259-b218-d5d8dceac01d_0(d77a9589a3dea314000ed2cde50a2c4e8723d9fa5a4fb9174c56749ea4031620): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" Feb 17 13:48:39 crc kubenswrapper[4768]: I0217 13:48:39.534228 4768 scope.go:117] "RemoveContainer" containerID="f0eabe9e6b5551e88ed34f7b32f5573dd3d736e0c52761e08b1a6b74957522ef" Feb 17 13:48:40 crc kubenswrapper[4768]: I0217 13:48:40.676565 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjjqj_e044bf1f-26b2-4a39-86e6-0440eff3eaa9/kube-multus/2.log" Feb 17 13:48:40 crc kubenswrapper[4768]: I0217 13:48:40.676836 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjjqj" event={"ID":"e044bf1f-26b2-4a39-86e6-0440eff3eaa9","Type":"ContainerStarted","Data":"cf85066dd9b520db75537778652d7a9ab029c6dbb678c1f0c955ae0953e94b5c"} Feb 17 13:48:43 crc kubenswrapper[4768]: I0217 13:48:43.600772 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qt2lc" Feb 17 13:48:52 crc kubenswrapper[4768]: 
I0217 13:48:52.533454 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:52 crc kubenswrapper[4768]: I0217 13:48:52.534965 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:52 crc kubenswrapper[4768]: I0217 13:48:52.983982 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt"] Feb 17 13:48:53 crc kubenswrapper[4768]: I0217 13:48:53.747604 4768 generic.go:334] "Generic (PLEG): container finished" podID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerID="284a10e0a1cc42ea2f90ea7e4d3e8e434d9d056d09cd108efd3bb8ca561058d6" exitCode=0 Feb 17 13:48:53 crc kubenswrapper[4768]: I0217 13:48:53.747721 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" event={"ID":"4d7e7247-8115-4259-b218-d5d8dceac01d","Type":"ContainerDied","Data":"284a10e0a1cc42ea2f90ea7e4d3e8e434d9d056d09cd108efd3bb8ca561058d6"} Feb 17 13:48:53 crc kubenswrapper[4768]: I0217 13:48:53.748062 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" event={"ID":"4d7e7247-8115-4259-b218-d5d8dceac01d","Type":"ContainerStarted","Data":"286bfbdecef3f75862b65ad0c9c0cefedd2c2e04627932ce2148d949d2be57f5"} Feb 17 13:48:55 crc kubenswrapper[4768]: I0217 13:48:55.760159 4768 generic.go:334] "Generic (PLEG): container finished" podID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerID="38e1f3aea84e9b42e7e29d968c519c6830321162dc454e28469c1ea66e6aeb71" exitCode=0 Feb 17 13:48:55 crc kubenswrapper[4768]: I0217 13:48:55.760267 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" event={"ID":"4d7e7247-8115-4259-b218-d5d8dceac01d","Type":"ContainerDied","Data":"38e1f3aea84e9b42e7e29d968c519c6830321162dc454e28469c1ea66e6aeb71"} Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.071746 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d9qb6"] Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.073199 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.084594 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9qb6"] Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.140666 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-utilities\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.140722 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-catalog-content\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.140763 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpvk\" (UniqueName: \"kubernetes.io/projected/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-kube-api-access-xcpvk\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" 
Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.242308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcpvk\" (UniqueName: \"kubernetes.io/projected/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-kube-api-access-xcpvk\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.242418 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-utilities\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.242455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-catalog-content\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.242975 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-catalog-content\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.243011 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-utilities\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.270122 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcpvk\" (UniqueName: \"kubernetes.io/projected/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-kube-api-access-xcpvk\") pod \"redhat-operators-d9qb6\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.427780 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.447612 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.684722 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9qb6"] Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.767289 4768 generic.go:334] "Generic (PLEG): container finished" podID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerID="5aa7f98feb027be4f211c226d3ef04b55b08d6171886fba0abd44fe12f892756" exitCode=0 Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.767340 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" event={"ID":"4d7e7247-8115-4259-b218-d5d8dceac01d","Type":"ContainerDied","Data":"5aa7f98feb027be4f211c226d3ef04b55b08d6171886fba0abd44fe12f892756"} Feb 17 13:48:56 crc kubenswrapper[4768]: I0217 13:48:56.768552 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9qb6" event={"ID":"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2","Type":"ContainerStarted","Data":"75c83f800885c6a2f652773dc831de71692a8740c5c02e4c5e5823ea36b28a45"} Feb 17 13:48:57 crc kubenswrapper[4768]: I0217 13:48:57.776479 4768 generic.go:334] "Generic (PLEG): container finished" podID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" 
containerID="a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c" exitCode=0 Feb 17 13:48:57 crc kubenswrapper[4768]: I0217 13:48:57.777630 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9qb6" event={"ID":"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2","Type":"ContainerDied","Data":"a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c"} Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.060424 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.060491 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.060536 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.061161 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"261ea7265dca6d9f9150a1c46ec950cce5894a7910bfec8d9ee8e08fac1f7c8f"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.061236 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://261ea7265dca6d9f9150a1c46ec950cce5894a7910bfec8d9ee8e08fac1f7c8f" gracePeriod=600 Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.075894 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.167509 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-util\") pod \"4d7e7247-8115-4259-b218-d5d8dceac01d\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.167590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-bundle\") pod \"4d7e7247-8115-4259-b218-d5d8dceac01d\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.167627 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xslth\" (UniqueName: \"kubernetes.io/projected/4d7e7247-8115-4259-b218-d5d8dceac01d-kube-api-access-xslth\") pod \"4d7e7247-8115-4259-b218-d5d8dceac01d\" (UID: \"4d7e7247-8115-4259-b218-d5d8dceac01d\") " Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.168232 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-bundle" (OuterVolumeSpecName: "bundle") pod "4d7e7247-8115-4259-b218-d5d8dceac01d" (UID: "4d7e7247-8115-4259-b218-d5d8dceac01d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.168717 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.172530 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7e7247-8115-4259-b218-d5d8dceac01d-kube-api-access-xslth" (OuterVolumeSpecName: "kube-api-access-xslth") pod "4d7e7247-8115-4259-b218-d5d8dceac01d" (UID: "4d7e7247-8115-4259-b218-d5d8dceac01d"). InnerVolumeSpecName "kube-api-access-xslth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.186319 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-util" (OuterVolumeSpecName: "util") pod "4d7e7247-8115-4259-b218-d5d8dceac01d" (UID: "4d7e7247-8115-4259-b218-d5d8dceac01d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.269838 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d7e7247-8115-4259-b218-d5d8dceac01d-util\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.269865 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xslth\" (UniqueName: \"kubernetes.io/projected/4d7e7247-8115-4259-b218-d5d8dceac01d-kube-api-access-xslth\") on node \"crc\" DevicePath \"\"" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.785729 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="261ea7265dca6d9f9150a1c46ec950cce5894a7910bfec8d9ee8e08fac1f7c8f" exitCode=0 Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.785834 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"261ea7265dca6d9f9150a1c46ec950cce5894a7910bfec8d9ee8e08fac1f7c8f"} Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.785893 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"83ffe2b5d1ed0faaa82ed446a55b456fa3a71e8473ab304c756bbf132bdab653"} Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.785919 4768 scope.go:117] "RemoveContainer" containerID="47b6caa1099a99506831f3c5757b6a6214ede83a2d3a9d01c1a4df4a6cd207c8" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.790790 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" 
event={"ID":"4d7e7247-8115-4259-b218-d5d8dceac01d","Type":"ContainerDied","Data":"286bfbdecef3f75862b65ad0c9c0cefedd2c2e04627932ce2148d949d2be57f5"} Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.790814 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="286bfbdecef3f75862b65ad0c9c0cefedd2c2e04627932ce2148d949d2be57f5" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.790850 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt" Feb 17 13:48:58 crc kubenswrapper[4768]: I0217 13:48:58.792132 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9qb6" event={"ID":"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2","Type":"ContainerStarted","Data":"bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185"} Feb 17 13:48:59 crc kubenswrapper[4768]: I0217 13:48:59.801864 4768 generic.go:334] "Generic (PLEG): container finished" podID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerID="bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185" exitCode=0 Feb 17 13:48:59 crc kubenswrapper[4768]: I0217 13:48:59.802074 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9qb6" event={"ID":"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2","Type":"ContainerDied","Data":"bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185"} Feb 17 13:49:00 crc kubenswrapper[4768]: I0217 13:49:00.811269 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9qb6" event={"ID":"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2","Type":"ContainerStarted","Data":"d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843"} Feb 17 13:49:00 crc kubenswrapper[4768]: I0217 13:49:00.827919 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-d9qb6" podStartSLOduration=2.3952662 podStartE2EDuration="4.827899541s" podCreationTimestamp="2026-02-17 13:48:56 +0000 UTC" firstStartedPulling="2026-02-17 13:48:57.778764205 +0000 UTC m=+757.058150647" lastFinishedPulling="2026-02-17 13:49:00.211397546 +0000 UTC m=+759.490783988" observedRunningTime="2026-02-17 13:49:00.825460546 +0000 UTC m=+760.104846988" watchObservedRunningTime="2026-02-17 13:49:00.827899541 +0000 UTC m=+760.107285983" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.787733 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2fndh"] Feb 17 13:49:03 crc kubenswrapper[4768]: E0217 13:49:03.788275 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerName="util" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.788290 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerName="util" Feb 17 13:49:03 crc kubenswrapper[4768]: E0217 13:49:03.788302 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerName="pull" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.788309 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerName="pull" Feb 17 13:49:03 crc kubenswrapper[4768]: E0217 13:49:03.788318 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerName="extract" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.788325 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerName="extract" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.788444 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7e7247-8115-4259-b218-d5d8dceac01d" containerName="extract" Feb 17 
13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.788864 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.790718 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.790774 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dcrxp" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.790950 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.800544 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2fndh"] Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.841666 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzpc\" (UniqueName: \"kubernetes.io/projected/89be9918-4f1d-4c85-8d1c-73f9245fd232-kube-api-access-hmzpc\") pod \"nmstate-operator-694c9596b7-2fndh\" (UID: \"89be9918-4f1d-4c85-8d1c-73f9245fd232\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.942496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzpc\" (UniqueName: \"kubernetes.io/projected/89be9918-4f1d-4c85-8d1c-73f9245fd232-kube-api-access-hmzpc\") pod \"nmstate-operator-694c9596b7-2fndh\" (UID: \"89be9918-4f1d-4c85-8d1c-73f9245fd232\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" Feb 17 13:49:03 crc kubenswrapper[4768]: I0217 13:49:03.969429 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzpc\" (UniqueName: 
\"kubernetes.io/projected/89be9918-4f1d-4c85-8d1c-73f9245fd232-kube-api-access-hmzpc\") pod \"nmstate-operator-694c9596b7-2fndh\" (UID: \"89be9918-4f1d-4c85-8d1c-73f9245fd232\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" Feb 17 13:49:04 crc kubenswrapper[4768]: I0217 13:49:04.110571 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" Feb 17 13:49:04 crc kubenswrapper[4768]: I0217 13:49:04.496361 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2fndh"] Feb 17 13:49:04 crc kubenswrapper[4768]: I0217 13:49:04.833893 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" event={"ID":"89be9918-4f1d-4c85-8d1c-73f9245fd232","Type":"ContainerStarted","Data":"083916fe4516a0ad6f85d31435cb29fd67039b1be34c5d2a8670df55eada809d"} Feb 17 13:49:06 crc kubenswrapper[4768]: I0217 13:49:06.428356 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:49:06 crc kubenswrapper[4768]: I0217 13:49:06.428685 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:49:07 crc kubenswrapper[4768]: I0217 13:49:07.472538 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d9qb6" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="registry-server" probeResult="failure" output=< Feb 17 13:49:07 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 13:49:07 crc kubenswrapper[4768]: > Feb 17 13:49:07 crc kubenswrapper[4768]: I0217 13:49:07.855469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" 
event={"ID":"89be9918-4f1d-4c85-8d1c-73f9245fd232","Type":"ContainerStarted","Data":"946bffdc6bc149e883bb84b41ae97e1f56030b73bd1c4b2aff29439cb911b593"} Feb 17 13:49:07 crc kubenswrapper[4768]: I0217 13:49:07.879617 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-2fndh" podStartSLOduration=2.671298028 podStartE2EDuration="4.879581695s" podCreationTimestamp="2026-02-17 13:49:03 +0000 UTC" firstStartedPulling="2026-02-17 13:49:04.503244078 +0000 UTC m=+763.782630510" lastFinishedPulling="2026-02-17 13:49:06.711527735 +0000 UTC m=+765.990914177" observedRunningTime="2026-02-17 13:49:07.870432241 +0000 UTC m=+767.149818773" watchObservedRunningTime="2026-02-17 13:49:07.879581695 +0000 UTC m=+767.158968167" Feb 17 13:49:08 crc kubenswrapper[4768]: I0217 13:49:08.976985 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2"] Feb 17 13:49:08 crc kubenswrapper[4768]: I0217 13:49:08.978199 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" Feb 17 13:49:08 crc kubenswrapper[4768]: I0217 13:49:08.980132 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wtggn" Feb 17 13:49:08 crc kubenswrapper[4768]: I0217 13:49:08.986442 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29"] Feb 17 13:49:08 crc kubenswrapper[4768]: I0217 13:49:08.987157 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:08 crc kubenswrapper[4768]: I0217 13:49:08.991768 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.012714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9splr\" (UniqueName: \"kubernetes.io/projected/77308fca-ea01-49d6-b264-61df88438fd0-kube-api-access-9splr\") pod \"nmstate-metrics-58c85c668d-f5lj2\" (UID: \"77308fca-ea01-49d6-b264-61df88438fd0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.012811 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57lrt\" (UniqueName: \"kubernetes.io/projected/f4ddc594-4af8-4856-9542-0a76bf8c5acc-kube-api-access-57lrt\") pod \"nmstate-webhook-866bcb46dc-ghl29\" (UID: \"f4ddc594-4af8-4856-9542-0a76bf8c5acc\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.012849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f4ddc594-4af8-4856-9542-0a76bf8c5acc-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ghl29\" (UID: \"f4ddc594-4af8-4856-9542-0a76bf8c5acc\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.030224 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dfmfq"] Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.031959 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.047060 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29"] Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.067921 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2"] Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.120397 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r"] Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.120837 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-ovs-socket\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.120888 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9splr\" (UniqueName: \"kubernetes.io/projected/77308fca-ea01-49d6-b264-61df88438fd0-kube-api-access-9splr\") pod \"nmstate-metrics-58c85c668d-f5lj2\" (UID: \"77308fca-ea01-49d6-b264-61df88438fd0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.120922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-dbus-socket\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.120947 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" 
(UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-nmstate-lock\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.120985 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57lrt\" (UniqueName: \"kubernetes.io/projected/f4ddc594-4af8-4856-9542-0a76bf8c5acc-kube-api-access-57lrt\") pod \"nmstate-webhook-866bcb46dc-ghl29\" (UID: \"f4ddc594-4af8-4856-9542-0a76bf8c5acc\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.121008 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f4ddc594-4af8-4856-9542-0a76bf8c5acc-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ghl29\" (UID: \"f4ddc594-4af8-4856-9542-0a76bf8c5acc\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.121038 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znlnr\" (UniqueName: \"kubernetes.io/projected/aef4be82-c769-456d-90be-c95789ab9c2c-kube-api-access-znlnr\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.122940 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.126235 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.126392 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wsv2n" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.126504 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.128578 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f4ddc594-4af8-4856-9542-0a76bf8c5acc-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ghl29\" (UID: \"f4ddc594-4af8-4856-9542-0a76bf8c5acc\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.147810 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57lrt\" (UniqueName: \"kubernetes.io/projected/f4ddc594-4af8-4856-9542-0a76bf8c5acc-kube-api-access-57lrt\") pod \"nmstate-webhook-866bcb46dc-ghl29\" (UID: \"f4ddc594-4af8-4856-9542-0a76bf8c5acc\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.148362 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9splr\" (UniqueName: \"kubernetes.io/projected/77308fca-ea01-49d6-b264-61df88438fd0-kube-api-access-9splr\") pod \"nmstate-metrics-58c85c668d-f5lj2\" (UID: \"77308fca-ea01-49d6-b264-61df88438fd0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.168614 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r"] 
Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.221909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-dbus-socket\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.221960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4r7p\" (UniqueName: \"kubernetes.io/projected/b7911594-28b5-4c40-b08b-5f3b33d9bd11-kube-api-access-c4r7p\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.221979 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-nmstate-lock\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.222014 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7911594-28b5-4c40-b08b-5f3b33d9bd11-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.222053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znlnr\" (UniqueName: \"kubernetes.io/projected/aef4be82-c769-456d-90be-c95789ab9c2c-kube-api-access-znlnr\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " 
pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.222072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7911594-28b5-4c40-b08b-5f3b33d9bd11-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.222112 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-ovs-socket\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.222175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-ovs-socket\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.222215 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-nmstate-lock\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.222262 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/aef4be82-c769-456d-90be-c95789ab9c2c-dbus-socket\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc 
kubenswrapper[4768]: I0217 13:49:09.248638 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znlnr\" (UniqueName: \"kubernetes.io/projected/aef4be82-c769-456d-90be-c95789ab9c2c-kube-api-access-znlnr\") pod \"nmstate-handler-dfmfq\" (UID: \"aef4be82-c769-456d-90be-c95789ab9c2c\") " pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.311501 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.321763 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5bf68987ff-v6qgp"] Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.322638 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.323136 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7911594-28b5-4c40-b08b-5f3b33d9bd11-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.323208 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4r7p\" (UniqueName: \"kubernetes.io/projected/b7911594-28b5-4c40-b08b-5f3b33d9bd11-kube-api-access-c4r7p\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.323246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/b7911594-28b5-4c40-b08b-5f3b33d9bd11-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: E0217 13:49:09.323502 4768 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 17 13:49:09 crc kubenswrapper[4768]: E0217 13:49:09.323636 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7911594-28b5-4c40-b08b-5f3b33d9bd11-plugin-serving-cert podName:b7911594-28b5-4c40-b08b-5f3b33d9bd11 nodeName:}" failed. No retries permitted until 2026-02-17 13:49:09.823599403 +0000 UTC m=+769.102986015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b7911594-28b5-4c40-b08b-5f3b33d9bd11-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-2cs2r" (UID: "b7911594-28b5-4c40-b08b-5f3b33d9bd11") : secret "plugin-serving-cert" not found Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.324138 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b7911594-28b5-4c40-b08b-5f3b33d9bd11-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.324425 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.339093 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bf68987ff-v6qgp"] Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.350885 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4r7p\" (UniqueName: \"kubernetes.io/projected/b7911594-28b5-4c40-b08b-5f3b33d9bd11-kube-api-access-c4r7p\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.365678 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:09 crc kubenswrapper[4768]: W0217 13:49:09.384385 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaef4be82_c769_456d_90be_c95789ab9c2c.slice/crio-0ef85c4e8bf38d9dd75c9e23c03cc05aa38c25172463f70ae681398751d07511 WatchSource:0}: Error finding container 0ef85c4e8bf38d9dd75c9e23c03cc05aa38c25172463f70ae681398751d07511: Status 404 returned error can't find the container with id 0ef85c4e8bf38d9dd75c9e23c03cc05aa38c25172463f70ae681398751d07511 Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.424304 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-oauth-serving-cert\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.424607 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-service-ca\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.424751 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-serving-cert\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.424802 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-trusted-ca-bundle\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.424854 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-oauth-config\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.424911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-config\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.425258 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhrt\" (UniqueName: \"kubernetes.io/projected/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-kube-api-access-gbhrt\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.526318 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-oauth-serving-cert\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.526362 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-service-ca\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.526410 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-serving-cert\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.526443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-trusted-ca-bundle\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.526472 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-oauth-config\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.527410 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-trusted-ca-bundle\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.526493 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-config\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.527490 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhrt\" (UniqueName: \"kubernetes.io/projected/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-kube-api-access-gbhrt\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.527496 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-config\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.527896 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-service-ca\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.528613 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-oauth-serving-cert\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.530755 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-oauth-config\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.530851 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-console-serving-cert\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.543960 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhrt\" (UniqueName: \"kubernetes.io/projected/d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b-kube-api-access-gbhrt\") pod \"console-5bf68987ff-v6qgp\" (UID: \"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b\") " pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.570919 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29"] Feb 17 13:49:09 crc 
kubenswrapper[4768]: W0217 13:49:09.578058 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4ddc594_4af8_4856_9542_0a76bf8c5acc.slice/crio-431f0563ce0a368b063c12c9f4552b01f1d0b569dac1d5610ee4ce17ee653c8d WatchSource:0}: Error finding container 431f0563ce0a368b063c12c9f4552b01f1d0b569dac1d5610ee4ce17ee653c8d: Status 404 returned error can't find the container with id 431f0563ce0a368b063c12c9f4552b01f1d0b569dac1d5610ee4ce17ee653c8d Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.674269 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.814707 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2"] Feb 17 13:49:09 crc kubenswrapper[4768]: W0217 13:49:09.823144 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77308fca_ea01_49d6_b264_61df88438fd0.slice/crio-e6ed8aadd26eba1bf5c5f21ee394bc16e74551c7a907ae6e473b442464a0bf61 WatchSource:0}: Error finding container e6ed8aadd26eba1bf5c5f21ee394bc16e74551c7a907ae6e473b442464a0bf61: Status 404 returned error can't find the container with id e6ed8aadd26eba1bf5c5f21ee394bc16e74551c7a907ae6e473b442464a0bf61 Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.831306 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7911594-28b5-4c40-b08b-5f3b33d9bd11-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.840971 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7911594-28b5-4c40-b08b-5f3b33d9bd11-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2cs2r\" (UID: \"b7911594-28b5-4c40-b08b-5f3b33d9bd11\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.859296 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bf68987ff-v6qgp"] Feb 17 13:49:09 crc kubenswrapper[4768]: W0217 13:49:09.864088 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4c56a55_2dd2_43c8_8bd7_0a35ea5cd86b.slice/crio-47c8314587fa54ec9f4a5e69b42f52e93fb4f251c1f1134da88cdefdb69053d5 WatchSource:0}: Error finding container 47c8314587fa54ec9f4a5e69b42f52e93fb4f251c1f1134da88cdefdb69053d5: Status 404 returned error can't find the container with id 47c8314587fa54ec9f4a5e69b42f52e93fb4f251c1f1134da88cdefdb69053d5 Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.866936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dfmfq" event={"ID":"aef4be82-c769-456d-90be-c95789ab9c2c","Type":"ContainerStarted","Data":"0ef85c4e8bf38d9dd75c9e23c03cc05aa38c25172463f70ae681398751d07511"} Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.869185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" event={"ID":"f4ddc594-4af8-4856-9542-0a76bf8c5acc","Type":"ContainerStarted","Data":"431f0563ce0a368b063c12c9f4552b01f1d0b569dac1d5610ee4ce17ee653c8d"} Feb 17 13:49:09 crc kubenswrapper[4768]: I0217 13:49:09.870821 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" event={"ID":"77308fca-ea01-49d6-b264-61df88438fd0","Type":"ContainerStarted","Data":"e6ed8aadd26eba1bf5c5f21ee394bc16e74551c7a907ae6e473b442464a0bf61"} Feb 17 13:49:10 crc kubenswrapper[4768]: 
I0217 13:49:10.083438 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" Feb 17 13:49:10 crc kubenswrapper[4768]: I0217 13:49:10.562393 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r"] Feb 17 13:49:10 crc kubenswrapper[4768]: I0217 13:49:10.884773 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" event={"ID":"b7911594-28b5-4c40-b08b-5f3b33d9bd11","Type":"ContainerStarted","Data":"311c74c0df169cfe181d91e6f7635f7a1b037d15cc82d96043a3c0f8f451c044"} Feb 17 13:49:10 crc kubenswrapper[4768]: I0217 13:49:10.888138 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bf68987ff-v6qgp" event={"ID":"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b","Type":"ContainerStarted","Data":"51b1b80dfb124b041412e3c73a94bd70e0366f632507813cb4799319d48a81d0"} Feb 17 13:49:10 crc kubenswrapper[4768]: I0217 13:49:10.888187 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bf68987ff-v6qgp" event={"ID":"d4c56a55-2dd2-43c8-8bd7-0a35ea5cd86b","Type":"ContainerStarted","Data":"47c8314587fa54ec9f4a5e69b42f52e93fb4f251c1f1134da88cdefdb69053d5"} Feb 17 13:49:11 crc kubenswrapper[4768]: I0217 13:49:11.566340 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5bf68987ff-v6qgp" podStartSLOduration=2.566320786 podStartE2EDuration="2.566320786s" podCreationTimestamp="2026-02-17 13:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:49:10.912432826 +0000 UTC m=+770.191819288" watchObservedRunningTime="2026-02-17 13:49:11.566320786 +0000 UTC m=+770.845707238" Feb 17 13:49:13 crc kubenswrapper[4768]: I0217 13:49:13.908753 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" event={"ID":"f4ddc594-4af8-4856-9542-0a76bf8c5acc","Type":"ContainerStarted","Data":"9e0c9697573766d5dfc831004c392343e7f5d3beaf5fad092349b79b212dfb03"} Feb 17 13:49:13 crc kubenswrapper[4768]: I0217 13:49:13.909634 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:13 crc kubenswrapper[4768]: I0217 13:49:13.911455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" event={"ID":"77308fca-ea01-49d6-b264-61df88438fd0","Type":"ContainerStarted","Data":"b4a03905df270e64364a8f555389e8e4eeae21cbd4ebfd1c7dcfc36d00f7a7fd"} Feb 17 13:49:13 crc kubenswrapper[4768]: I0217 13:49:13.913907 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dfmfq" event={"ID":"aef4be82-c769-456d-90be-c95789ab9c2c","Type":"ContainerStarted","Data":"688844b054cba953325a8df77871409734ce177cb01b9e008b6287c942a05c7e"} Feb 17 13:49:13 crc kubenswrapper[4768]: I0217 13:49:13.914237 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:13 crc kubenswrapper[4768]: I0217 13:49:13.962988 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" podStartSLOduration=2.755773955 podStartE2EDuration="5.962960828s" podCreationTimestamp="2026-02-17 13:49:08 +0000 UTC" firstStartedPulling="2026-02-17 13:49:09.580197294 +0000 UTC m=+768.859583726" lastFinishedPulling="2026-02-17 13:49:12.787384117 +0000 UTC m=+772.066770599" observedRunningTime="2026-02-17 13:49:13.939225516 +0000 UTC m=+773.218612038" watchObservedRunningTime="2026-02-17 13:49:13.962960828 +0000 UTC m=+773.242347310" Feb 17 13:49:13 crc kubenswrapper[4768]: I0217 13:49:13.964467 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-handler-dfmfq" podStartSLOduration=2.600015587 podStartE2EDuration="5.964455197s" podCreationTimestamp="2026-02-17 13:49:08 +0000 UTC" firstStartedPulling="2026-02-17 13:49:09.389345333 +0000 UTC m=+768.668731775" lastFinishedPulling="2026-02-17 13:49:12.753784923 +0000 UTC m=+772.033171385" observedRunningTime="2026-02-17 13:49:13.954671827 +0000 UTC m=+773.234058299" watchObservedRunningTime="2026-02-17 13:49:13.964455197 +0000 UTC m=+773.243841679" Feb 17 13:49:14 crc kubenswrapper[4768]: I0217 13:49:14.923446 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" event={"ID":"b7911594-28b5-4c40-b08b-5f3b33d9bd11","Type":"ContainerStarted","Data":"3de54e82a359fde05ebe8fd738baabe4425cebb68aca2011c6c6d30c70e53770"} Feb 17 13:49:14 crc kubenswrapper[4768]: I0217 13:49:14.941375 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2cs2r" podStartSLOduration=2.5462389119999997 podStartE2EDuration="5.941358508s" podCreationTimestamp="2026-02-17 13:49:09 +0000 UTC" firstStartedPulling="2026-02-17 13:49:10.576686627 +0000 UTC m=+769.856073069" lastFinishedPulling="2026-02-17 13:49:13.971806193 +0000 UTC m=+773.251192665" observedRunningTime="2026-02-17 13:49:14.940205008 +0000 UTC m=+774.219591450" watchObservedRunningTime="2026-02-17 13:49:14.941358508 +0000 UTC m=+774.220744950" Feb 17 13:49:15 crc kubenswrapper[4768]: I0217 13:49:15.933825 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" event={"ID":"77308fca-ea01-49d6-b264-61df88438fd0","Type":"ContainerStarted","Data":"6ca452c7e6e8a6abb5d1a73a7b954fa4787245ddd418236c022fc1336effeff1"} Feb 17 13:49:15 crc kubenswrapper[4768]: I0217 13:49:15.953079 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5lj2" 
podStartSLOduration=2.20422305 podStartE2EDuration="7.953055835s" podCreationTimestamp="2026-02-17 13:49:08 +0000 UTC" firstStartedPulling="2026-02-17 13:49:09.826588155 +0000 UTC m=+769.105974587" lastFinishedPulling="2026-02-17 13:49:15.57542093 +0000 UTC m=+774.854807372" observedRunningTime="2026-02-17 13:49:15.951629867 +0000 UTC m=+775.231016319" watchObservedRunningTime="2026-02-17 13:49:15.953055835 +0000 UTC m=+775.232442307" Feb 17 13:49:16 crc kubenswrapper[4768]: I0217 13:49:16.496911 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:49:16 crc kubenswrapper[4768]: I0217 13:49:16.554536 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:49:16 crc kubenswrapper[4768]: I0217 13:49:16.742425 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9qb6"] Feb 17 13:49:17 crc kubenswrapper[4768]: I0217 13:49:17.945567 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d9qb6" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="registry-server" containerID="cri-o://d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843" gracePeriod=2 Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.340518 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.446585 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcpvk\" (UniqueName: \"kubernetes.io/projected/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-kube-api-access-xcpvk\") pod \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.446680 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-catalog-content\") pod \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.446789 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-utilities\") pod \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\" (UID: \"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2\") " Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.447637 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-utilities" (OuterVolumeSpecName: "utilities") pod "e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" (UID: "e99cfd2d-ef01-4e8d-86a9-cb715e8878f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.453283 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-kube-api-access-xcpvk" (OuterVolumeSpecName: "kube-api-access-xcpvk") pod "e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" (UID: "e99cfd2d-ef01-4e8d-86a9-cb715e8878f2"). InnerVolumeSpecName "kube-api-access-xcpvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.548681 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.548895 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcpvk\" (UniqueName: \"kubernetes.io/projected/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-kube-api-access-xcpvk\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.566466 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" (UID: "e99cfd2d-ef01-4e8d-86a9-cb715e8878f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.651057 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.954946 4768 generic.go:334] "Generic (PLEG): container finished" podID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerID="d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843" exitCode=0 Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.955039 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9qb6" event={"ID":"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2","Type":"ContainerDied","Data":"d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843"} Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.955091 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-d9qb6" event={"ID":"e99cfd2d-ef01-4e8d-86a9-cb715e8878f2","Type":"ContainerDied","Data":"75c83f800885c6a2f652773dc831de71692a8740c5c02e4c5e5823ea36b28a45"} Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.955156 4768 scope.go:117] "RemoveContainer" containerID="d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.955046 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9qb6" Feb 17 13:49:18 crc kubenswrapper[4768]: I0217 13:49:18.978394 4768 scope.go:117] "RemoveContainer" containerID="bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.000442 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9qb6"] Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.002526 4768 scope.go:117] "RemoveContainer" containerID="a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.017549 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d9qb6"] Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.022954 4768 scope.go:117] "RemoveContainer" containerID="d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843" Feb 17 13:49:19 crc kubenswrapper[4768]: E0217 13:49:19.023544 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843\": container with ID starting with d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843 not found: ID does not exist" containerID="d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.023580 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843"} err="failed to get container status \"d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843\": rpc error: code = NotFound desc = could not find container \"d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843\": container with ID starting with d6c5dc26284296034a3a222a04093dcd9a015f6a00eb7ee32249e242e56d6843 not found: ID does not exist" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.023607 4768 scope.go:117] "RemoveContainer" containerID="bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185" Feb 17 13:49:19 crc kubenswrapper[4768]: E0217 13:49:19.023891 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185\": container with ID starting with bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185 not found: ID does not exist" containerID="bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.023980 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185"} err="failed to get container status \"bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185\": rpc error: code = NotFound desc = could not find container \"bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185\": container with ID starting with bb74e2bac322e7ddc7d68deee9e883d54d247a9cfd2347f37d196a5d20eef185 not found: ID does not exist" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.024048 4768 scope.go:117] "RemoveContainer" containerID="a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c" Feb 17 13:49:19 crc kubenswrapper[4768]: E0217 
13:49:19.024419 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c\": container with ID starting with a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c not found: ID does not exist" containerID="a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.024440 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c"} err="failed to get container status \"a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c\": rpc error: code = NotFound desc = could not find container \"a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c\": container with ID starting with a98a2604ec9aceb028fed066f0060ac3fed394c44de3c9b6cf1ce4f30f30534c not found: ID does not exist" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.387524 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dfmfq" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.544875 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" path="/var/lib/kubelet/pods/e99cfd2d-ef01-4e8d-86a9-cb715e8878f2/volumes" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.675237 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.675817 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.681019 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5bf68987ff-v6qgp" 
Feb 17 13:49:19 crc kubenswrapper[4768]: I0217 13:49:19.970791 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5bf68987ff-v6qgp" Feb 17 13:49:20 crc kubenswrapper[4768]: I0217 13:49:20.035980 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9fmzj"] Feb 17 13:49:29 crc kubenswrapper[4768]: I0217 13:49:29.333500 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ghl29" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.139170 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s"] Feb 17 13:49:41 crc kubenswrapper[4768]: E0217 13:49:41.147662 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="extract-content" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.147696 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="extract-content" Feb 17 13:49:41 crc kubenswrapper[4768]: E0217 13:49:41.147707 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="registry-server" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.147713 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="registry-server" Feb 17 13:49:41 crc kubenswrapper[4768]: E0217 13:49:41.147732 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="extract-utilities" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.147740 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="extract-utilities" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 
13:49:41.147858 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99cfd2d-ef01-4e8d-86a9-cb715e8878f2" containerName="registry-server" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.148551 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s"] Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.148685 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.151202 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.284080 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.284465 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpl55\" (UniqueName: \"kubernetes.io/projected/1b729781-75da-4a19-afbf-7a9459f6a7da-kube-api-access-vpl55\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.284557 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.385715 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpl55\" (UniqueName: \"kubernetes.io/projected/1b729781-75da-4a19-afbf-7a9459f6a7da-kube-api-access-vpl55\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.386252 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.386719 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.387226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: 
\"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.387422 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.406610 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpl55\" (UniqueName: \"kubernetes.io/projected/1b729781-75da-4a19-afbf-7a9459f6a7da-kube-api-access-vpl55\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.462655 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:41 crc kubenswrapper[4768]: I0217 13:49:41.648905 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s"] Feb 17 13:49:42 crc kubenswrapper[4768]: I0217 13:49:42.110779 4768 generic.go:334] "Generic (PLEG): container finished" podID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerID="2c05ded99780f6d80209517fe321cd7b0edce7982f73104bae3a835b7a6cf7ff" exitCode=0 Feb 17 13:49:42 crc kubenswrapper[4768]: I0217 13:49:42.110841 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" event={"ID":"1b729781-75da-4a19-afbf-7a9459f6a7da","Type":"ContainerDied","Data":"2c05ded99780f6d80209517fe321cd7b0edce7982f73104bae3a835b7a6cf7ff"} Feb 17 13:49:42 crc kubenswrapper[4768]: I0217 13:49:42.110911 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" event={"ID":"1b729781-75da-4a19-afbf-7a9459f6a7da","Type":"ContainerStarted","Data":"697933e288af96c8b6c150a149446f841be0094e71aced4e1ddf5141cd09447c"} Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.086415 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9fmzj" podUID="0030a046-d1bb-4a34-830c-c275306cee43" containerName="console" containerID="cri-o://0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10" gracePeriod=15 Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.128754 4768 generic.go:334] "Generic (PLEG): container finished" podID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerID="419a9a40e4591aad49d7af25d79256688ab741877f34fe7b1a8d1b2c9472e8b7" exitCode=0 Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.128804 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" event={"ID":"1b729781-75da-4a19-afbf-7a9459f6a7da","Type":"ContainerDied","Data":"419a9a40e4591aad49d7af25d79256688ab741877f34fe7b1a8d1b2c9472e8b7"} Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.519357 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fmzj_0030a046-d1bb-4a34-830c-c275306cee43/console/0.log" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.519607 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.635345 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-serving-cert\") pod \"0030a046-d1bb-4a34-830c-c275306cee43\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.635416 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-trusted-ca-bundle\") pod \"0030a046-d1bb-4a34-830c-c275306cee43\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.635447 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-console-config\") pod \"0030a046-d1bb-4a34-830c-c275306cee43\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.635495 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-service-ca\") pod \"0030a046-d1bb-4a34-830c-c275306cee43\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.635536 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcrjd\" (UniqueName: \"kubernetes.io/projected/0030a046-d1bb-4a34-830c-c275306cee43-kube-api-access-gcrjd\") pod \"0030a046-d1bb-4a34-830c-c275306cee43\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.635556 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-oauth-serving-cert\") pod \"0030a046-d1bb-4a34-830c-c275306cee43\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.635607 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-oauth-config\") pod \"0030a046-d1bb-4a34-830c-c275306cee43\" (UID: \"0030a046-d1bb-4a34-830c-c275306cee43\") " Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.636532 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0030a046-d1bb-4a34-830c-c275306cee43" (UID: "0030a046-d1bb-4a34-830c-c275306cee43"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.636524 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0030a046-d1bb-4a34-830c-c275306cee43" (UID: "0030a046-d1bb-4a34-830c-c275306cee43"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.636591 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-service-ca" (OuterVolumeSpecName: "service-ca") pod "0030a046-d1bb-4a34-830c-c275306cee43" (UID: "0030a046-d1bb-4a34-830c-c275306cee43"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.636809 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-console-config" (OuterVolumeSpecName: "console-config") pod "0030a046-d1bb-4a34-830c-c275306cee43" (UID: "0030a046-d1bb-4a34-830c-c275306cee43"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.642251 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0030a046-d1bb-4a34-830c-c275306cee43" (UID: "0030a046-d1bb-4a34-830c-c275306cee43"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.643045 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0030a046-d1bb-4a34-830c-c275306cee43-kube-api-access-gcrjd" (OuterVolumeSpecName: "kube-api-access-gcrjd") pod "0030a046-d1bb-4a34-830c-c275306cee43" (UID: "0030a046-d1bb-4a34-830c-c275306cee43"). InnerVolumeSpecName "kube-api-access-gcrjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.646146 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0030a046-d1bb-4a34-830c-c275306cee43" (UID: "0030a046-d1bb-4a34-830c-c275306cee43"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.736788 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.736851 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.736867 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.736881 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.736939 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0030a046-d1bb-4a34-830c-c275306cee43-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.736950 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcrjd\" (UniqueName: \"kubernetes.io/projected/0030a046-d1bb-4a34-830c-c275306cee43-kube-api-access-gcrjd\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:45 crc kubenswrapper[4768]: I0217 13:49:45.736959 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0030a046-d1bb-4a34-830c-c275306cee43-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.138291 4768 generic.go:334] "Generic (PLEG): container finished" podID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerID="9c5154b3c5de4847808fb276b9351022a314bf1b817475b6418449b2a642e83e" exitCode=0 Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.138421 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" event={"ID":"1b729781-75da-4a19-afbf-7a9459f6a7da","Type":"ContainerDied","Data":"9c5154b3c5de4847808fb276b9351022a314bf1b817475b6418449b2a642e83e"} Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.142262 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fmzj_0030a046-d1bb-4a34-830c-c275306cee43/console/0.log" Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.142346 4768 generic.go:334] "Generic (PLEG): container finished" podID="0030a046-d1bb-4a34-830c-c275306cee43" 
containerID="0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10" exitCode=2 Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.142407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fmzj" event={"ID":"0030a046-d1bb-4a34-830c-c275306cee43","Type":"ContainerDied","Data":"0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10"} Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.142462 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fmzj" event={"ID":"0030a046-d1bb-4a34-830c-c275306cee43","Type":"ContainerDied","Data":"8c24714f527a739abe2f59896b8429c3c57b368f0af4069f5376ea795f5efa92"} Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.142460 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fmzj" Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.142486 4768 scope.go:117] "RemoveContainer" containerID="0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10" Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.174886 4768 scope.go:117] "RemoveContainer" containerID="0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10" Feb 17 13:49:46 crc kubenswrapper[4768]: E0217 13:49:46.179742 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10\": container with ID starting with 0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10 not found: ID does not exist" containerID="0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10" Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.179795 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10"} err="failed to get container 
status \"0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10\": rpc error: code = NotFound desc = could not find container \"0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10\": container with ID starting with 0341b0ed70f9a262f791c3325bd1c895770a7fadc1bdf1789880873bb9ef9e10 not found: ID does not exist" Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.219569 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9fmzj"] Feb 17 13:49:46 crc kubenswrapper[4768]: I0217 13:49:46.224847 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9fmzj"] Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.447208 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.547285 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0030a046-d1bb-4a34-830c-c275306cee43" path="/var/lib/kubelet/pods/0030a046-d1bb-4a34-830c-c275306cee43/volumes" Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.560171 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpl55\" (UniqueName: \"kubernetes.io/projected/1b729781-75da-4a19-afbf-7a9459f6a7da-kube-api-access-vpl55\") pod \"1b729781-75da-4a19-afbf-7a9459f6a7da\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.560241 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-util\") pod \"1b729781-75da-4a19-afbf-7a9459f6a7da\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.560284 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-bundle\") pod \"1b729781-75da-4a19-afbf-7a9459f6a7da\" (UID: \"1b729781-75da-4a19-afbf-7a9459f6a7da\") " Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.561376 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-bundle" (OuterVolumeSpecName: "bundle") pod "1b729781-75da-4a19-afbf-7a9459f6a7da" (UID: "1b729781-75da-4a19-afbf-7a9459f6a7da"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.567557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b729781-75da-4a19-afbf-7a9459f6a7da-kube-api-access-vpl55" (OuterVolumeSpecName: "kube-api-access-vpl55") pod "1b729781-75da-4a19-afbf-7a9459f6a7da" (UID: "1b729781-75da-4a19-afbf-7a9459f6a7da"). InnerVolumeSpecName "kube-api-access-vpl55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.595184 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-util" (OuterVolumeSpecName: "util") pod "1b729781-75da-4a19-afbf-7a9459f6a7da" (UID: "1b729781-75da-4a19-afbf-7a9459f6a7da"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.661653 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpl55\" (UniqueName: \"kubernetes.io/projected/1b729781-75da-4a19-afbf-7a9459f6a7da-kube-api-access-vpl55\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.661690 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-util\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:47 crc kubenswrapper[4768]: I0217 13:49:47.661699 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1b729781-75da-4a19-afbf-7a9459f6a7da-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:49:48 crc kubenswrapper[4768]: I0217 13:49:48.159728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" event={"ID":"1b729781-75da-4a19-afbf-7a9459f6a7da","Type":"ContainerDied","Data":"697933e288af96c8b6c150a149446f841be0094e71aced4e1ddf5141cd09447c"} Feb 17 13:49:48 crc kubenswrapper[4768]: I0217 13:49:48.160143 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="697933e288af96c8b6c150a149446f841be0094e71aced4e1ddf5141cd09447c" Feb 17 13:49:48 crc kubenswrapper[4768]: I0217 13:49:48.159792 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s" Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.201468 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"] Feb 17 13:49:58 crc kubenswrapper[4768]: E0217 13:49:58.202360 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerName="pull" Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.202377 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerName="pull" Feb 17 13:49:58 crc kubenswrapper[4768]: E0217 13:49:58.202392 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerName="util" Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.202399 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerName="util" Feb 17 13:49:58 crc kubenswrapper[4768]: E0217 13:49:58.202413 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerName="extract" Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.202422 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerName="extract" Feb 17 13:49:58 crc kubenswrapper[4768]: E0217 13:49:58.202431 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0030a046-d1bb-4a34-830c-c275306cee43" containerName="console" Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.202438 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0030a046-d1bb-4a34-830c-c275306cee43" containerName="console" Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.202570 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0030a046-d1bb-4a34-830c-c275306cee43" 
containerName="console"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.202586 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b729781-75da-4a19-afbf-7a9459f6a7da" containerName="extract"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.203054 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.204429 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.205150 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.205626 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zhzzz"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.205720 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.216948 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"]
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.228156 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.394999 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-webhook-cert\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.395067 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-apiservice-cert\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.395165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgxl\" (UniqueName: \"kubernetes.io/projected/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-kube-api-access-cbgxl\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.495926 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-apiservice-cert\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.496014 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgxl\" (UniqueName: \"kubernetes.io/projected/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-kube-api-access-cbgxl\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.496062 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-webhook-cert\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.502423 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-apiservice-cert\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.502611 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-webhook-cert\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.516778 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgxl\" (UniqueName: \"kubernetes.io/projected/75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e-kube-api-access-cbgxl\") pod \"metallb-operator-controller-manager-df9f8fb7d-rjc2w\" (UID: \"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e\") " pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.520306 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.557588 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"]
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.558909 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.560579 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.560792 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-zrszw"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.560979 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.577240 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"]
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.714700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fac3924e-d369-478e-9c10-c0a381b8696c-webhook-cert\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.714845 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb286\" (UniqueName: \"kubernetes.io/projected/fac3924e-d369-478e-9c10-c0a381b8696c-kube-api-access-mb286\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.714982 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fac3924e-d369-478e-9c10-c0a381b8696c-apiservice-cert\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.818903 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb286\" (UniqueName: \"kubernetes.io/projected/fac3924e-d369-478e-9c10-c0a381b8696c-kube-api-access-mb286\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.818991 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fac3924e-d369-478e-9c10-c0a381b8696c-apiservice-cert\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.819017 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fac3924e-d369-478e-9c10-c0a381b8696c-webhook-cert\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.827898 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fac3924e-d369-478e-9c10-c0a381b8696c-apiservice-cert\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.828770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fac3924e-d369-478e-9c10-c0a381b8696c-webhook-cert\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.847769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb286\" (UniqueName: \"kubernetes.io/projected/fac3924e-d369-478e-9c10-c0a381b8696c-kube-api-access-mb286\") pod \"metallb-operator-webhook-server-654b8769b8-plb5c\" (UID: \"fac3924e-d369-478e-9c10-c0a381b8696c\") " pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.862785 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"]
Feb 17 13:49:58 crc kubenswrapper[4768]: W0217 13:49:58.869858 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aabc4c_e213_4a4b_a0ec_b0907ae8fd0e.slice/crio-b9a4bbaf5d391b6493e4e86fa3f6ee9e0905bcccf0600232f58162d11b52a3f1 WatchSource:0}: Error finding container b9a4bbaf5d391b6493e4e86fa3f6ee9e0905bcccf0600232f58162d11b52a3f1: Status 404 returned error can't find the container with id b9a4bbaf5d391b6493e4e86fa3f6ee9e0905bcccf0600232f58162d11b52a3f1
Feb 17 13:49:58 crc kubenswrapper[4768]: I0217 13:49:58.903684 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:49:59 crc kubenswrapper[4768]: I0217 13:49:59.140487 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"]
Feb 17 13:49:59 crc kubenswrapper[4768]: W0217 13:49:59.143060 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac3924e_d369_478e_9c10_c0a381b8696c.slice/crio-71e147d9e471faaef30f050970820e759920ea99a41cef0c5014a1724566eb16 WatchSource:0}: Error finding container 71e147d9e471faaef30f050970820e759920ea99a41cef0c5014a1724566eb16: Status 404 returned error can't find the container with id 71e147d9e471faaef30f050970820e759920ea99a41cef0c5014a1724566eb16
Feb 17 13:49:59 crc kubenswrapper[4768]: I0217 13:49:59.220623 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w" event={"ID":"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e","Type":"ContainerStarted","Data":"b9a4bbaf5d391b6493e4e86fa3f6ee9e0905bcccf0600232f58162d11b52a3f1"}
Feb 17 13:49:59 crc kubenswrapper[4768]: I0217 13:49:59.221481 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c" event={"ID":"fac3924e-d369-478e-9c10-c0a381b8696c","Type":"ContainerStarted","Data":"71e147d9e471faaef30f050970820e759920ea99a41cef0c5014a1724566eb16"}
Feb 17 13:50:04 crc kubenswrapper[4768]: I0217 13:50:04.278621 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c" event={"ID":"fac3924e-d369-478e-9c10-c0a381b8696c","Type":"ContainerStarted","Data":"d1a4cbfc1721215d2c6775b34eed9700718b7422cc0af241716dbd955c959069"}
Feb 17 13:50:04 crc kubenswrapper[4768]: I0217 13:50:04.280211 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:50:04 crc kubenswrapper[4768]: I0217 13:50:04.280371 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w" event={"ID":"75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e","Type":"ContainerStarted","Data":"b609764cd1e9671b60360baea0c4d9ff8f8824e7e3534e5f0d7ddc71c7edf123"}
Feb 17 13:50:04 crc kubenswrapper[4768]: I0217 13:50:04.280562 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:50:04 crc kubenswrapper[4768]: I0217 13:50:04.305201 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c" podStartSLOduration=1.32590755 podStartE2EDuration="6.305180785s" podCreationTimestamp="2026-02-17 13:49:58 +0000 UTC" firstStartedPulling="2026-02-17 13:49:59.145943508 +0000 UTC m=+818.425329950" lastFinishedPulling="2026-02-17 13:50:04.125216733 +0000 UTC m=+823.404603185" observedRunningTime="2026-02-17 13:50:04.302641687 +0000 UTC m=+823.582028139" watchObservedRunningTime="2026-02-17 13:50:04.305180785 +0000 UTC m=+823.584567237"
Feb 17 13:50:04 crc kubenswrapper[4768]: I0217 13:50:04.319450 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w" podStartSLOduration=1.109698042 podStartE2EDuration="6.319431464s" podCreationTimestamp="2026-02-17 13:49:58 +0000 UTC" firstStartedPulling="2026-02-17 13:49:58.873486613 +0000 UTC m=+818.152873065" lastFinishedPulling="2026-02-17 13:50:04.083220055 +0000 UTC m=+823.362606487" observedRunningTime="2026-02-17 13:50:04.318814918 +0000 UTC m=+823.598201360" watchObservedRunningTime="2026-02-17 13:50:04.319431464 +0000 UTC m=+823.598817906"
Feb 17 13:50:18 crc kubenswrapper[4768]: I0217 13:50:18.907363 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-654b8769b8-plb5c"
Feb 17 13:50:38 crc kubenswrapper[4768]: I0217 13:50:38.523939 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-df9f8fb7d-rjc2w"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.324422 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"]
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.328510 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.337763 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hr4bc"]
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.338270 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.341766 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-xxz79"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.346604 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"]
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.346833 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.349061 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.349406 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.416535 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8trsw"]
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.418921 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.421418 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-k7tqn"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.421587 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.421711 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.421818 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.437322 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-nv4f7"]
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.438190 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.443846 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.455045 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-nv4f7"]
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458783 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-conf\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458816 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-sockets\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458839 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-reloader\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d47747b-4164-4e0e-b424-513d688cf6a8-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qq5lm\" (UID: \"8d47747b-4164-4e0e-b424-513d688cf6a8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xhm\" (UniqueName: \"kubernetes.io/projected/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-kube-api-access-x6xhm\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-startup\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5ddc\" (UniqueName: \"kubernetes.io/projected/8d47747b-4164-4e0e-b424-513d688cf6a8-kube-api-access-r5ddc\") pod \"frr-k8s-webhook-server-78b44bf5bb-qq5lm\" (UID: \"8d47747b-4164-4e0e-b424-513d688cf6a8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458944 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-metrics\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.458960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-metrics-certs\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559696 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-conf\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-sockets\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-reloader\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559797 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d47747b-4164-4e0e-b424-513d688cf6a8-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qq5lm\" (UID: \"8d47747b-4164-4e0e-b424-513d688cf6a8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559825 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xhm\" (UniqueName: \"kubernetes.io/projected/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-kube-api-access-x6xhm\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559863 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxbl\" (UniqueName: \"kubernetes.io/projected/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-kube-api-access-lkxbl\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-startup\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5ddc\" (UniqueName: \"kubernetes.io/projected/8d47747b-4164-4e0e-b424-513d688cf6a8-kube-api-access-r5ddc\") pod \"frr-k8s-webhook-server-78b44bf5bb-qq5lm\" (UID: \"8d47747b-4164-4e0e-b424-513d688cf6a8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559940 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-metallb-excludel2\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559961 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a756f9a-bd11-42b8-9b67-1585ee9a5322-metrics-certs\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.559988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-metrics\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-metrics-certs\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-conf\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560451 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-reloader\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560562 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-metrics\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560571 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-sockets\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560612 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a756f9a-bd11-42b8-9b67-1585ee9a5322-cert\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560645 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-memberlist\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.560672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-metrics-certs\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.561290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-frr-startup\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.561598 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbbh\" (UniqueName: \"kubernetes.io/projected/1a756f9a-bd11-42b8-9b67-1585ee9a5322-kube-api-access-5rbbh\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.566453 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-metrics-certs\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.567868 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d47747b-4164-4e0e-b424-513d688cf6a8-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-qq5lm\" (UID: \"8d47747b-4164-4e0e-b424-513d688cf6a8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.575201 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xhm\" (UniqueName: \"kubernetes.io/projected/9267631c-d9e1-49dc-a9bc-40f8ef1182ca-kube-api-access-x6xhm\") pod \"frr-k8s-hr4bc\" (UID: \"9267631c-d9e1-49dc-a9bc-40f8ef1182ca\") " pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.578918 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5ddc\" (UniqueName: \"kubernetes.io/projected/8d47747b-4164-4e0e-b424-513d688cf6a8-kube-api-access-r5ddc\") pod \"frr-k8s-webhook-server-78b44bf5bb-qq5lm\" (UID: \"8d47747b-4164-4e0e-b424-513d688cf6a8\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.662678 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkxbl\" (UniqueName: \"kubernetes.io/projected/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-kube-api-access-lkxbl\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.662763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a756f9a-bd11-42b8-9b67-1585ee9a5322-metrics-certs\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.662794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-metallb-excludel2\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.662857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a756f9a-bd11-42b8-9b67-1585ee9a5322-cert\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.662901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-memberlist\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.662933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-metrics-certs\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.662959 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rbbh\" (UniqueName: \"kubernetes.io/projected/1a756f9a-bd11-42b8-9b67-1585ee9a5322-kube-api-access-5rbbh\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: E0217 13:50:39.663214 4768 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 13:50:39 crc kubenswrapper[4768]: E0217 13:50:39.663295 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-memberlist podName:90e8e26b-3dc0-4bf7-a493-8c089ace61a0 nodeName:}" failed. No retries permitted until 2026-02-17 13:50:40.163258969 +0000 UTC m=+859.442645411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-memberlist") pod "speaker-8trsw" (UID: "90e8e26b-3dc0-4bf7-a493-8c089ace61a0") : secret "metallb-memberlist" not found
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.664123 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-metallb-excludel2\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.666145 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.666370 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a756f9a-bd11-42b8-9b67-1585ee9a5322-metrics-certs\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.668629 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-metrics-certs\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.676583 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a756f9a-bd11-42b8-9b67-1585ee9a5322-cert\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.681619 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkxbl\" (UniqueName: \"kubernetes.io/projected/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-kube-api-access-lkxbl\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.685307 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rbbh\" (UniqueName: \"kubernetes.io/projected/1a756f9a-bd11-42b8-9b67-1585ee9a5322-kube-api-access-5rbbh\") pod \"controller-69bbfbf88f-nv4f7\" (UID: \"1a756f9a-bd11-42b8-9b67-1585ee9a5322\") " pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.688152 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.699148 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hr4bc"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.755755 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-nv4f7"
Feb 17 13:50:39 crc kubenswrapper[4768]: I0217 13:50:39.944583 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-nv4f7"]
Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.102977 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm"]
Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.171154 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-memberlist\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.177500 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/90e8e26b-3dc0-4bf7-a493-8c089ace61a0-memberlist\") pod \"speaker-8trsw\" (UID: \"90e8e26b-3dc0-4bf7-a493-8c089ace61a0\") " pod="metallb-system/speaker-8trsw"
Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.340125 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8trsw" Feb 17 13:50:40 crc kubenswrapper[4768]: W0217 13:50:40.366958 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90e8e26b_3dc0_4bf7_a493_8c089ace61a0.slice/crio-baaea18e937de7ecbdc8b6812bae8e23e61d648329073bb1731ceda5cea0f168 WatchSource:0}: Error finding container baaea18e937de7ecbdc8b6812bae8e23e61d648329073bb1731ceda5cea0f168: Status 404 returned error can't find the container with id baaea18e937de7ecbdc8b6812bae8e23e61d648329073bb1731ceda5cea0f168 Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.506908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-nv4f7" event={"ID":"1a756f9a-bd11-42b8-9b67-1585ee9a5322","Type":"ContainerStarted","Data":"0d5b59f4c5d23ae3efc31c94770976fa4c3188fc432e4539f7614fb2662a69c9"} Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.507179 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-nv4f7" event={"ID":"1a756f9a-bd11-42b8-9b67-1585ee9a5322","Type":"ContainerStarted","Data":"3980aab4f729d05f4a32ff2d181a768af8578141023c8212b293655016cf05a5"} Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.507193 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-nv4f7" event={"ID":"1a756f9a-bd11-42b8-9b67-1585ee9a5322","Type":"ContainerStarted","Data":"75d759b7edf4e4afe581451d11fc878995f3f171b9ad73c4b979b33ea148f8dd"} Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.508097 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-nv4f7" Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.518164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8trsw" 
event={"ID":"90e8e26b-3dc0-4bf7-a493-8c089ace61a0","Type":"ContainerStarted","Data":"baaea18e937de7ecbdc8b6812bae8e23e61d648329073bb1731ceda5cea0f168"} Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.527312 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerStarted","Data":"18fee322f4a6e3f1fb1f71f3392551e446be3047990533fcd5c10358cc2660eb"} Feb 17 13:50:40 crc kubenswrapper[4768]: I0217 13:50:40.528539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm" event={"ID":"8d47747b-4164-4e0e-b424-513d688cf6a8","Type":"ContainerStarted","Data":"687e8bf8777798a1746907ad9b6d7b448db8319c3a308a63a384aec3f1147b18"} Feb 17 13:50:41 crc kubenswrapper[4768]: I0217 13:50:41.549725 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8trsw" event={"ID":"90e8e26b-3dc0-4bf7-a493-8c089ace61a0","Type":"ContainerStarted","Data":"964aed556eabbfb4ac7af04f41aead87567012a06d5c7b348f0346c6b01fa6aa"} Feb 17 13:50:41 crc kubenswrapper[4768]: I0217 13:50:41.552926 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8trsw" event={"ID":"90e8e26b-3dc0-4bf7-a493-8c089ace61a0","Type":"ContainerStarted","Data":"df49dfdc2a1fb15ae3c33f565f70fb5a559c009653d732f2da866f4020e5176b"} Feb 17 13:50:41 crc kubenswrapper[4768]: I0217 13:50:41.555764 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-nv4f7" podStartSLOduration=2.555730872 podStartE2EDuration="2.555730872s" podCreationTimestamp="2026-02-17 13:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:50:40.54327946 +0000 UTC m=+859.822665912" watchObservedRunningTime="2026-02-17 13:50:41.555730872 +0000 UTC m=+860.835117314" Feb 17 13:50:41 crc 
kubenswrapper[4768]: I0217 13:50:41.573592 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8trsw" podStartSLOduration=2.573535004 podStartE2EDuration="2.573535004s" podCreationTimestamp="2026-02-17 13:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:50:41.56832945 +0000 UTC m=+860.847715892" watchObservedRunningTime="2026-02-17 13:50:41.573535004 +0000 UTC m=+860.852921446" Feb 17 13:50:42 crc kubenswrapper[4768]: I0217 13:50:42.546571 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8trsw" Feb 17 13:50:47 crc kubenswrapper[4768]: I0217 13:50:47.583942 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm" event={"ID":"8d47747b-4164-4e0e-b424-513d688cf6a8","Type":"ContainerStarted","Data":"7fb5d7d64a19a0ffd084cc65e10142738abf05d3134bd95c35dbc8a4526102d9"} Feb 17 13:50:47 crc kubenswrapper[4768]: I0217 13:50:47.584612 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm" Feb 17 13:50:47 crc kubenswrapper[4768]: I0217 13:50:47.586696 4768 generic.go:334] "Generic (PLEG): container finished" podID="9267631c-d9e1-49dc-a9bc-40f8ef1182ca" containerID="3af4232c731063cbaa4afb57d3c68cd1f4694fd4621b7eb1e5af45fbd7eeae8f" exitCode=0 Feb 17 13:50:47 crc kubenswrapper[4768]: I0217 13:50:47.586762 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerDied","Data":"3af4232c731063cbaa4afb57d3c68cd1f4694fd4621b7eb1e5af45fbd7eeae8f"} Feb 17 13:50:47 crc kubenswrapper[4768]: I0217 13:50:47.608780 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm" 
podStartSLOduration=2.028632292 podStartE2EDuration="8.608758162s" podCreationTimestamp="2026-02-17 13:50:39 +0000 UTC" firstStartedPulling="2026-02-17 13:50:40.11489984 +0000 UTC m=+859.394286272" lastFinishedPulling="2026-02-17 13:50:46.69502568 +0000 UTC m=+865.974412142" observedRunningTime="2026-02-17 13:50:47.600597097 +0000 UTC m=+866.879983549" watchObservedRunningTime="2026-02-17 13:50:47.608758162 +0000 UTC m=+866.888144604" Feb 17 13:50:48 crc kubenswrapper[4768]: I0217 13:50:48.594735 4768 generic.go:334] "Generic (PLEG): container finished" podID="9267631c-d9e1-49dc-a9bc-40f8ef1182ca" containerID="9a6ca9af2c80e03949a19f070297e8b3afa5ab298e9c406903ae4c566e5866da" exitCode=0 Feb 17 13:50:48 crc kubenswrapper[4768]: I0217 13:50:48.594797 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerDied","Data":"9a6ca9af2c80e03949a19f070297e8b3afa5ab298e9c406903ae4c566e5866da"} Feb 17 13:50:49 crc kubenswrapper[4768]: I0217 13:50:49.603902 4768 generic.go:334] "Generic (PLEG): container finished" podID="9267631c-d9e1-49dc-a9bc-40f8ef1182ca" containerID="3983fa0f75396bda81701d0d03cbf647089179c32d760ddba8eee7be42674816" exitCode=0 Feb 17 13:50:49 crc kubenswrapper[4768]: I0217 13:50:49.603953 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerDied","Data":"3983fa0f75396bda81701d0d03cbf647089179c32d760ddba8eee7be42674816"} Feb 17 13:50:50 crc kubenswrapper[4768]: I0217 13:50:50.352772 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8trsw" Feb 17 13:50:50 crc kubenswrapper[4768]: I0217 13:50:50.612399 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" 
event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerStarted","Data":"a70dc5baa07a47bc77966d5ebc5ec4e799fe98915940f5da2bbfd784b4864333"} Feb 17 13:50:50 crc kubenswrapper[4768]: I0217 13:50:50.612440 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerStarted","Data":"fed9cba256a22cd568a0369b052968f6b92174308d7e8b4cf52d3743eebc1b05"} Feb 17 13:50:50 crc kubenswrapper[4768]: I0217 13:50:50.612449 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerStarted","Data":"6f1d7aadebe3ccbe1d652940786d828a1f84a0a9f89d84862f0af00d688b5922"} Feb 17 13:50:50 crc kubenswrapper[4768]: I0217 13:50:50.612458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerStarted","Data":"7395d3f7b83b7bf33763e330133ad6fcf396464fd26d113a9f6efd94051fe0cb"} Feb 17 13:50:50 crc kubenswrapper[4768]: I0217 13:50:50.612466 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerStarted","Data":"1aac75d30c07b8afcf1c45073dc558a991c597909b50cbb1fd14b376f3bf5032"} Feb 17 13:50:51 crc kubenswrapper[4768]: I0217 13:50:51.624617 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hr4bc" event={"ID":"9267631c-d9e1-49dc-a9bc-40f8ef1182ca","Type":"ContainerStarted","Data":"dd1731f0d5d6b6bdcc8a0b3453027996c62fd9a45283a461732ad99dffad6432"} Feb 17 13:50:51 crc kubenswrapper[4768]: I0217 13:50:51.625079 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hr4bc" Feb 17 13:50:51 crc kubenswrapper[4768]: I0217 13:50:51.649229 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-hr4bc" podStartSLOduration=5.781031903 podStartE2EDuration="12.649208225s" podCreationTimestamp="2026-02-17 13:50:39 +0000 UTC" firstStartedPulling="2026-02-17 13:50:39.818866298 +0000 UTC m=+859.098252730" lastFinishedPulling="2026-02-17 13:50:46.6870426 +0000 UTC m=+865.966429052" observedRunningTime="2026-02-17 13:50:51.645899344 +0000 UTC m=+870.925285796" watchObservedRunningTime="2026-02-17 13:50:51.649208225 +0000 UTC m=+870.928594677" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.384573 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pzxtl"] Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.385860 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.402364 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzxtl"] Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.505856 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qc5t\" (UniqueName: \"kubernetes.io/projected/2575a01b-7720-4c86-ba23-e0cfb4150f85-kube-api-access-5qc5t\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.505925 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-utilities\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.506140 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-catalog-content\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.607554 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qc5t\" (UniqueName: \"kubernetes.io/projected/2575a01b-7720-4c86-ba23-e0cfb4150f85-kube-api-access-5qc5t\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.607614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-utilities\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.607657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-catalog-content\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.608232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-catalog-content\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.608238 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-utilities\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.626752 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qc5t\" (UniqueName: \"kubernetes.io/projected/2575a01b-7720-4c86-ba23-e0cfb4150f85-kube-api-access-5qc5t\") pod \"certified-operators-pzxtl\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:53 crc kubenswrapper[4768]: I0217 13:50:53.705610 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:50:54 crc kubenswrapper[4768]: I0217 13:50:54.308622 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzxtl"] Feb 17 13:50:54 crc kubenswrapper[4768]: W0217 13:50:54.316341 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2575a01b_7720_4c86_ba23_e0cfb4150f85.slice/crio-e4233e429b3cbd32e3ba087aacf67a16376b27103f9ffda8e304135a7e60ec46 WatchSource:0}: Error finding container e4233e429b3cbd32e3ba087aacf67a16376b27103f9ffda8e304135a7e60ec46: Status 404 returned error can't find the container with id e4233e429b3cbd32e3ba087aacf67a16376b27103f9ffda8e304135a7e60ec46 Feb 17 13:50:54 crc kubenswrapper[4768]: I0217 13:50:54.641913 4768 generic.go:334] "Generic (PLEG): container finished" podID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerID="72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff" exitCode=0 Feb 17 13:50:54 crc kubenswrapper[4768]: I0217 13:50:54.641964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzxtl" 
event={"ID":"2575a01b-7720-4c86-ba23-e0cfb4150f85","Type":"ContainerDied","Data":"72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff"} Feb 17 13:50:54 crc kubenswrapper[4768]: I0217 13:50:54.642247 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzxtl" event={"ID":"2575a01b-7720-4c86-ba23-e0cfb4150f85","Type":"ContainerStarted","Data":"e4233e429b3cbd32e3ba087aacf67a16376b27103f9ffda8e304135a7e60ec46"} Feb 17 13:50:54 crc kubenswrapper[4768]: I0217 13:50:54.700477 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hr4bc" Feb 17 13:50:54 crc kubenswrapper[4768]: I0217 13:50:54.736313 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hr4bc" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.358616 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nsd2m"] Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.360056 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.370447 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsd2m"] Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.470400 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdcx\" (UniqueName: \"kubernetes.io/projected/6b8a2819-6947-42eb-8421-3dfd4da9cab4-kube-api-access-7hdcx\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.470703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-catalog-content\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.470892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-utilities\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.572542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-utilities\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.572610 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7hdcx\" (UniqueName: \"kubernetes.io/projected/6b8a2819-6947-42eb-8421-3dfd4da9cab4-kube-api-access-7hdcx\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.572630 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-catalog-content\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.573152 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-utilities\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.573403 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-catalog-content\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.590622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hdcx\" (UniqueName: \"kubernetes.io/projected/6b8a2819-6947-42eb-8421-3dfd4da9cab4-kube-api-access-7hdcx\") pod \"redhat-marketplace-nsd2m\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.649894 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-pzxtl" event={"ID":"2575a01b-7720-4c86-ba23-e0cfb4150f85","Type":"ContainerStarted","Data":"96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3"} Feb 17 13:50:55 crc kubenswrapper[4768]: I0217 13:50:55.682294 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:50:56 crc kubenswrapper[4768]: I0217 13:50:56.102068 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsd2m"] Feb 17 13:50:56 crc kubenswrapper[4768]: W0217 13:50:56.111330 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8a2819_6947_42eb_8421_3dfd4da9cab4.slice/crio-339c181b07207e7582e9b3acaa739c7a552cf9ccb3610a8a2652bb9079e4009b WatchSource:0}: Error finding container 339c181b07207e7582e9b3acaa739c7a552cf9ccb3610a8a2652bb9079e4009b: Status 404 returned error can't find the container with id 339c181b07207e7582e9b3acaa739c7a552cf9ccb3610a8a2652bb9079e4009b Feb 17 13:50:56 crc kubenswrapper[4768]: I0217 13:50:56.656585 4768 generic.go:334] "Generic (PLEG): container finished" podID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerID="96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3" exitCode=0 Feb 17 13:50:56 crc kubenswrapper[4768]: I0217 13:50:56.656682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzxtl" event={"ID":"2575a01b-7720-4c86-ba23-e0cfb4150f85","Type":"ContainerDied","Data":"96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3"} Feb 17 13:50:56 crc kubenswrapper[4768]: I0217 13:50:56.658645 4768 generic.go:334] "Generic (PLEG): container finished" podID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerID="bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda" exitCode=0 Feb 17 13:50:56 crc kubenswrapper[4768]: I0217 
13:50:56.658676 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsd2m" event={"ID":"6b8a2819-6947-42eb-8421-3dfd4da9cab4","Type":"ContainerDied","Data":"bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda"} Feb 17 13:50:56 crc kubenswrapper[4768]: I0217 13:50:56.658698 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsd2m" event={"ID":"6b8a2819-6947-42eb-8421-3dfd4da9cab4","Type":"ContainerStarted","Data":"339c181b07207e7582e9b3acaa739c7a552cf9ccb3610a8a2652bb9079e4009b"} Feb 17 13:50:57 crc kubenswrapper[4768]: I0217 13:50:57.665613 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzxtl" event={"ID":"2575a01b-7720-4c86-ba23-e0cfb4150f85","Type":"ContainerStarted","Data":"da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205"} Feb 17 13:50:57 crc kubenswrapper[4768]: I0217 13:50:57.667494 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsd2m" event={"ID":"6b8a2819-6947-42eb-8421-3dfd4da9cab4","Type":"ContainerStarted","Data":"1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0"} Feb 17 13:50:57 crc kubenswrapper[4768]: I0217 13:50:57.689835 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pzxtl" podStartSLOduration=2.200977124 podStartE2EDuration="4.689819521s" podCreationTimestamp="2026-02-17 13:50:53 +0000 UTC" firstStartedPulling="2026-02-17 13:50:54.643236958 +0000 UTC m=+873.922623400" lastFinishedPulling="2026-02-17 13:50:57.132079345 +0000 UTC m=+876.411465797" observedRunningTime="2026-02-17 13:50:57.686576172 +0000 UTC m=+876.965962614" watchObservedRunningTime="2026-02-17 13:50:57.689819521 +0000 UTC m=+876.969205963" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.060568 4768 patch_prober.go:28] interesting 
pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.061011 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.362154 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-46vqg"] Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.362961 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.364994 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.365179 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-blqcx" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.366967 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.373212 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-46vqg"] Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.530471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfmj\" (UniqueName: 
\"kubernetes.io/projected/ee54ff6a-14d8-4701-beac-8f6eeafc5d84-kube-api-access-jvfmj\") pod \"openstack-operator-index-46vqg\" (UID: \"ee54ff6a-14d8-4701-beac-8f6eeafc5d84\") " pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.632255 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvfmj\" (UniqueName: \"kubernetes.io/projected/ee54ff6a-14d8-4701-beac-8f6eeafc5d84-kube-api-access-jvfmj\") pod \"openstack-operator-index-46vqg\" (UID: \"ee54ff6a-14d8-4701-beac-8f6eeafc5d84\") " pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.657888 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvfmj\" (UniqueName: \"kubernetes.io/projected/ee54ff6a-14d8-4701-beac-8f6eeafc5d84-kube-api-access-jvfmj\") pod \"openstack-operator-index-46vqg\" (UID: \"ee54ff6a-14d8-4701-beac-8f6eeafc5d84\") " pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.676466 4768 generic.go:334] "Generic (PLEG): container finished" podID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerID="1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0" exitCode=0 Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.677301 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsd2m" event={"ID":"6b8a2819-6947-42eb-8421-3dfd4da9cab4","Type":"ContainerDied","Data":"1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0"} Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.739437 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:50:58 crc kubenswrapper[4768]: I0217 13:50:58.950804 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-46vqg"] Feb 17 13:50:59 crc kubenswrapper[4768]: I0217 13:50:59.684045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsd2m" event={"ID":"6b8a2819-6947-42eb-8421-3dfd4da9cab4","Type":"ContainerStarted","Data":"427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11"} Feb 17 13:50:59 crc kubenswrapper[4768]: I0217 13:50:59.685073 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-46vqg" event={"ID":"ee54ff6a-14d8-4701-beac-8f6eeafc5d84","Type":"ContainerStarted","Data":"2b9698b79ff6668e5d9b73cdfcbde4ab06cdc21f32b3f33094943872aa297a7b"} Feb 17 13:50:59 crc kubenswrapper[4768]: I0217 13:50:59.694486 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-qq5lm" Feb 17 13:50:59 crc kubenswrapper[4768]: I0217 13:50:59.703014 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nsd2m" podStartSLOduration=1.954817668 podStartE2EDuration="4.702990423s" podCreationTimestamp="2026-02-17 13:50:55 +0000 UTC" firstStartedPulling="2026-02-17 13:50:56.659760143 +0000 UTC m=+875.939146585" lastFinishedPulling="2026-02-17 13:50:59.407932888 +0000 UTC m=+878.687319340" observedRunningTime="2026-02-17 13:50:59.702791817 +0000 UTC m=+878.982178259" watchObservedRunningTime="2026-02-17 13:50:59.702990423 +0000 UTC m=+878.982376865" Feb 17 13:50:59 crc kubenswrapper[4768]: I0217 13:50:59.703138 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hr4bc" Feb 17 13:50:59 crc kubenswrapper[4768]: I0217 13:50:59.759295 4768 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-nv4f7" Feb 17 13:51:02 crc kubenswrapper[4768]: I0217 13:51:02.721283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-46vqg" event={"ID":"ee54ff6a-14d8-4701-beac-8f6eeafc5d84","Type":"ContainerStarted","Data":"94c58fd577fe2db8827ad960005aa3dabc36e6a316db4ac530b438466786e9bd"} Feb 17 13:51:02 crc kubenswrapper[4768]: I0217 13:51:02.742938 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-46vqg" podStartSLOduration=2.002665275 podStartE2EDuration="4.742912192s" podCreationTimestamp="2026-02-17 13:50:58 +0000 UTC" firstStartedPulling="2026-02-17 13:50:58.961263554 +0000 UTC m=+878.240649996" lastFinishedPulling="2026-02-17 13:51:01.701510461 +0000 UTC m=+880.980896913" observedRunningTime="2026-02-17 13:51:02.736274759 +0000 UTC m=+882.015661201" watchObservedRunningTime="2026-02-17 13:51:02.742912192 +0000 UTC m=+882.022298644" Feb 17 13:51:03 crc kubenswrapper[4768]: I0217 13:51:03.705957 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:51:03 crc kubenswrapper[4768]: I0217 13:51:03.706031 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:51:03 crc kubenswrapper[4768]: I0217 13:51:03.759102 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:51:03 crc kubenswrapper[4768]: I0217 13:51:03.798860 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:51:05 crc kubenswrapper[4768]: I0217 13:51:05.683468 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:51:05 crc kubenswrapper[4768]: I0217 13:51:05.683810 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:51:05 crc kubenswrapper[4768]: I0217 13:51:05.768247 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:51:05 crc kubenswrapper[4768]: I0217 13:51:05.811395 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.361588 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rftkv"] Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.362851 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.385155 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rftkv"] Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.456724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrbt\" (UniqueName: \"kubernetes.io/projected/05ba0b25-ef30-47cb-a4fe-bff21358432e-kube-api-access-thrbt\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.456792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-utilities\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 
13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.456824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-catalog-content\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.558759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrbt\" (UniqueName: \"kubernetes.io/projected/05ba0b25-ef30-47cb-a4fe-bff21358432e-kube-api-access-thrbt\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.558816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-utilities\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.558846 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-catalog-content\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.559338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-catalog-content\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 
13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.559460 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-utilities\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.583648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrbt\" (UniqueName: \"kubernetes.io/projected/05ba0b25-ef30-47cb-a4fe-bff21358432e-kube-api-access-thrbt\") pod \"community-operators-rftkv\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:07 crc kubenswrapper[4768]: I0217 13:51:07.701714 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.195387 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rftkv"] Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.739992 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.740266 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.755434 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzxtl"] Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.755667 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pzxtl" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="registry-server" 
containerID="cri-o://da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205" gracePeriod=2 Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.768347 4768 generic.go:334] "Generic (PLEG): container finished" podID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerID="d8c40f23530c652b6594eee11f1fbb7a32781b3997d1be0619e6ba48b1a92021" exitCode=0 Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.768407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftkv" event={"ID":"05ba0b25-ef30-47cb-a4fe-bff21358432e","Type":"ContainerDied","Data":"d8c40f23530c652b6594eee11f1fbb7a32781b3997d1be0619e6ba48b1a92021"} Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.768437 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftkv" event={"ID":"05ba0b25-ef30-47cb-a4fe-bff21358432e","Type":"ContainerStarted","Data":"4fb16a9f20c810f8ff08f26895c480f5a01953bb59857348ca5a220bfb0c26ea"} Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.779136 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:51:08 crc kubenswrapper[4768]: I0217 13:51:08.809687 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-46vqg" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.123741 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.280170 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qc5t\" (UniqueName: \"kubernetes.io/projected/2575a01b-7720-4c86-ba23-e0cfb4150f85-kube-api-access-5qc5t\") pod \"2575a01b-7720-4c86-ba23-e0cfb4150f85\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.280291 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-utilities\") pod \"2575a01b-7720-4c86-ba23-e0cfb4150f85\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.280578 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-catalog-content\") pod \"2575a01b-7720-4c86-ba23-e0cfb4150f85\" (UID: \"2575a01b-7720-4c86-ba23-e0cfb4150f85\") " Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.281035 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-utilities" (OuterVolumeSpecName: "utilities") pod "2575a01b-7720-4c86-ba23-e0cfb4150f85" (UID: "2575a01b-7720-4c86-ba23-e0cfb4150f85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.286127 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2575a01b-7720-4c86-ba23-e0cfb4150f85-kube-api-access-5qc5t" (OuterVolumeSpecName: "kube-api-access-5qc5t") pod "2575a01b-7720-4c86-ba23-e0cfb4150f85" (UID: "2575a01b-7720-4c86-ba23-e0cfb4150f85"). InnerVolumeSpecName "kube-api-access-5qc5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.290759 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.290781 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qc5t\" (UniqueName: \"kubernetes.io/projected/2575a01b-7720-4c86-ba23-e0cfb4150f85-kube-api-access-5qc5t\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.340300 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2575a01b-7720-4c86-ba23-e0cfb4150f85" (UID: "2575a01b-7720-4c86-ba23-e0cfb4150f85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.392615 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575a01b-7720-4c86-ba23-e0cfb4150f85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.779192 4768 generic.go:334] "Generic (PLEG): container finished" podID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerID="da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205" exitCode=0 Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.779259 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzxtl" event={"ID":"2575a01b-7720-4c86-ba23-e0cfb4150f85","Type":"ContainerDied","Data":"da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205"} Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.779307 4768 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzxtl" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.779320 4768 scope.go:117] "RemoveContainer" containerID="da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.779307 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzxtl" event={"ID":"2575a01b-7720-4c86-ba23-e0cfb4150f85","Type":"ContainerDied","Data":"e4233e429b3cbd32e3ba087aacf67a16376b27103f9ffda8e304135a7e60ec46"} Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.805484 4768 scope.go:117] "RemoveContainer" containerID="96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.810233 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzxtl"] Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.820868 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pzxtl"] Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.836010 4768 scope.go:117] "RemoveContainer" containerID="72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.857719 4768 scope.go:117] "RemoveContainer" containerID="da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205" Feb 17 13:51:09 crc kubenswrapper[4768]: E0217 13:51:09.858184 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205\": container with ID starting with da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205 not found: ID does not exist" containerID="da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.858233 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205"} err="failed to get container status \"da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205\": rpc error: code = NotFound desc = could not find container \"da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205\": container with ID starting with da645af803d078e59f27dbedc502a1487493ba0136aa01124612e605e3de8205 not found: ID does not exist" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.858269 4768 scope.go:117] "RemoveContainer" containerID="96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3" Feb 17 13:51:09 crc kubenswrapper[4768]: E0217 13:51:09.858804 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3\": container with ID starting with 96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3 not found: ID does not exist" containerID="96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.858848 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3"} err="failed to get container status \"96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3\": rpc error: code = NotFound desc = could not find container \"96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3\": container with ID starting with 96e8779e31445cfb3c13378ade5dabbf147f5f2f9f83c87d69b5714e4bc077e3 not found: ID does not exist" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.858875 4768 scope.go:117] "RemoveContainer" containerID="72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff" Feb 17 13:51:09 crc kubenswrapper[4768]: E0217 
13:51:09.859263 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff\": container with ID starting with 72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff not found: ID does not exist" containerID="72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff" Feb 17 13:51:09 crc kubenswrapper[4768]: I0217 13:51:09.859295 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff"} err="failed to get container status \"72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff\": rpc error: code = NotFound desc = could not find container \"72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff\": container with ID starting with 72cb50012e35699ad3344f72ac82285bac3dd24901477c58c5d2382147dc91ff not found: ID does not exist" Feb 17 13:51:10 crc kubenswrapper[4768]: I0217 13:51:10.788176 4768 generic.go:334] "Generic (PLEG): container finished" podID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerID="405c89ec22602051345439084c534dfdb1d7c760b6d3d1132d8926bcb0110f64" exitCode=0 Feb 17 13:51:10 crc kubenswrapper[4768]: I0217 13:51:10.788263 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftkv" event={"ID":"05ba0b25-ef30-47cb-a4fe-bff21358432e","Type":"ContainerDied","Data":"405c89ec22602051345439084c534dfdb1d7c760b6d3d1132d8926bcb0110f64"} Feb 17 13:51:11 crc kubenswrapper[4768]: I0217 13:51:11.541163 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" path="/var/lib/kubelet/pods/2575a01b-7720-4c86-ba23-e0cfb4150f85/volumes" Feb 17 13:51:11 crc kubenswrapper[4768]: I0217 13:51:11.797769 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rftkv" event={"ID":"05ba0b25-ef30-47cb-a4fe-bff21358432e","Type":"ContainerStarted","Data":"95925a766e98b829f3f301f677e67644e7165038cc5e9c67e7ed0ab8a0643402"} Feb 17 13:51:11 crc kubenswrapper[4768]: I0217 13:51:11.816707 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rftkv" podStartSLOduration=2.362232364 podStartE2EDuration="4.816687972s" podCreationTimestamp="2026-02-17 13:51:07 +0000 UTC" firstStartedPulling="2026-02-17 13:51:08.772345961 +0000 UTC m=+888.051732423" lastFinishedPulling="2026-02-17 13:51:11.226801589 +0000 UTC m=+890.506188031" observedRunningTime="2026-02-17 13:51:11.815004195 +0000 UTC m=+891.094390637" watchObservedRunningTime="2026-02-17 13:51:11.816687972 +0000 UTC m=+891.096074414" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.596376 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj"] Feb 17 13:51:12 crc kubenswrapper[4768]: E0217 13:51:12.596903 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="extract-utilities" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.596972 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="extract-utilities" Feb 17 13:51:12 crc kubenswrapper[4768]: E0217 13:51:12.597052 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="registry-server" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.597156 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="registry-server" Feb 17 13:51:12 crc kubenswrapper[4768]: E0217 13:51:12.597241 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="extract-content" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.597312 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="extract-content" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.597474 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2575a01b-7720-4c86-ba23-e0cfb4150f85" containerName="registry-server" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.598397 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.601868 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-5qj7m" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.605940 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj"] Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.740671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbsj7\" (UniqueName: \"kubernetes.io/projected/536648e3-7aff-4027-8132-3aed7835b43f-kube-api-access-gbsj7\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.740735 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " 
pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.740814 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.844863 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.845320 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbsj7\" (UniqueName: \"kubernetes.io/projected/536648e3-7aff-4027-8132-3aed7835b43f-kube-api-access-gbsj7\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.845352 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 
13:51:12.845947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.846543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.870907 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbsj7\" (UniqueName: \"kubernetes.io/projected/536648e3-7aff-4027-8132-3aed7835b43f-kube-api-access-gbsj7\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:12 crc kubenswrapper[4768]: I0217 13:51:12.926712 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:13 crc kubenswrapper[4768]: I0217 13:51:13.346887 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj"] Feb 17 13:51:13 crc kubenswrapper[4768]: W0217 13:51:13.349541 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod536648e3_7aff_4027_8132_3aed7835b43f.slice/crio-42a03b802fdb95b200bf472f39b341e9690d3f985cf2fcfe07313448551363aa WatchSource:0}: Error finding container 42a03b802fdb95b200bf472f39b341e9690d3f985cf2fcfe07313448551363aa: Status 404 returned error can't find the container with id 42a03b802fdb95b200bf472f39b341e9690d3f985cf2fcfe07313448551363aa Feb 17 13:51:13 crc kubenswrapper[4768]: I0217 13:51:13.812272 4768 generic.go:334] "Generic (PLEG): container finished" podID="536648e3-7aff-4027-8132-3aed7835b43f" containerID="0de067e6e4a58dec557d926f7a5ae5e5a257a592b2e809b2604cdd0e0e4f11ef" exitCode=0 Feb 17 13:51:13 crc kubenswrapper[4768]: I0217 13:51:13.812480 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" event={"ID":"536648e3-7aff-4027-8132-3aed7835b43f","Type":"ContainerDied","Data":"0de067e6e4a58dec557d926f7a5ae5e5a257a592b2e809b2604cdd0e0e4f11ef"} Feb 17 13:51:13 crc kubenswrapper[4768]: I0217 13:51:13.812567 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" event={"ID":"536648e3-7aff-4027-8132-3aed7835b43f","Type":"ContainerStarted","Data":"42a03b802fdb95b200bf472f39b341e9690d3f985cf2fcfe07313448551363aa"} Feb 17 13:51:13 crc kubenswrapper[4768]: I0217 13:51:13.958337 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nsd2m"] Feb 17 13:51:13 crc kubenswrapper[4768]: I0217 13:51:13.958612 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nsd2m" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="registry-server" containerID="cri-o://427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11" gracePeriod=2 Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.462473 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.568251 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-utilities\") pod \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.568349 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-catalog-content\") pod \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.568427 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hdcx\" (UniqueName: \"kubernetes.io/projected/6b8a2819-6947-42eb-8421-3dfd4da9cab4-kube-api-access-7hdcx\") pod \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\" (UID: \"6b8a2819-6947-42eb-8421-3dfd4da9cab4\") " Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.569993 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-utilities" (OuterVolumeSpecName: "utilities") pod "6b8a2819-6947-42eb-8421-3dfd4da9cab4" (UID: 
"6b8a2819-6947-42eb-8421-3dfd4da9cab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.578287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8a2819-6947-42eb-8421-3dfd4da9cab4-kube-api-access-7hdcx" (OuterVolumeSpecName: "kube-api-access-7hdcx") pod "6b8a2819-6947-42eb-8421-3dfd4da9cab4" (UID: "6b8a2819-6947-42eb-8421-3dfd4da9cab4"). InnerVolumeSpecName "kube-api-access-7hdcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.590214 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b8a2819-6947-42eb-8421-3dfd4da9cab4" (UID: "6b8a2819-6947-42eb-8421-3dfd4da9cab4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.670056 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.670089 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a2819-6947-42eb-8421-3dfd4da9cab4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.670116 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hdcx\" (UniqueName: \"kubernetes.io/projected/6b8a2819-6947-42eb-8421-3dfd4da9cab4-kube-api-access-7hdcx\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.823641 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerID="427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11" exitCode=0 Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.823724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsd2m" event={"ID":"6b8a2819-6947-42eb-8421-3dfd4da9cab4","Type":"ContainerDied","Data":"427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11"} Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.823758 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsd2m" event={"ID":"6b8a2819-6947-42eb-8421-3dfd4da9cab4","Type":"ContainerDied","Data":"339c181b07207e7582e9b3acaa739c7a552cf9ccb3610a8a2652bb9079e4009b"} Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.823784 4768 scope.go:117] "RemoveContainer" containerID="427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.823931 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsd2m" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.827348 4768 generic.go:334] "Generic (PLEG): container finished" podID="536648e3-7aff-4027-8132-3aed7835b43f" containerID="560298fb38bc6137f0fe2915f0f7ce676b9e61af1c49c7579afee9c3643a9129" exitCode=0 Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.827411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" event={"ID":"536648e3-7aff-4027-8132-3aed7835b43f","Type":"ContainerDied","Data":"560298fb38bc6137f0fe2915f0f7ce676b9e61af1c49c7579afee9c3643a9129"} Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.869963 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsd2m"] Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.875524 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsd2m"] Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.875964 4768 scope.go:117] "RemoveContainer" containerID="1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.891097 4768 scope.go:117] "RemoveContainer" containerID="bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.913392 4768 scope.go:117] "RemoveContainer" containerID="427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11" Feb 17 13:51:14 crc kubenswrapper[4768]: E0217 13:51:14.913842 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11\": container with ID starting with 427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11 not found: ID does not exist" 
containerID="427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.913879 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11"} err="failed to get container status \"427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11\": rpc error: code = NotFound desc = could not find container \"427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11\": container with ID starting with 427c35df720b971339aa6f43ea6dc3f07153282e5148107cd61f004b228f9d11 not found: ID does not exist" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.913903 4768 scope.go:117] "RemoveContainer" containerID="1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0" Feb 17 13:51:14 crc kubenswrapper[4768]: E0217 13:51:14.914462 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0\": container with ID starting with 1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0 not found: ID does not exist" containerID="1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.914494 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0"} err="failed to get container status \"1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0\": rpc error: code = NotFound desc = could not find container \"1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0\": container with ID starting with 1162788d18df35f8f0a1714a8f4354a19d805b7afcc56e1bf729cc5078ad8ca0 not found: ID does not exist" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.914548 4768 scope.go:117] 
"RemoveContainer" containerID="bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda" Feb 17 13:51:14 crc kubenswrapper[4768]: E0217 13:51:14.914976 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda\": container with ID starting with bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda not found: ID does not exist" containerID="bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda" Feb 17 13:51:14 crc kubenswrapper[4768]: I0217 13:51:14.915002 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda"} err="failed to get container status \"bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda\": rpc error: code = NotFound desc = could not find container \"bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda\": container with ID starting with bd3e45c210d21e01b9787b9e79f1c19a256635ef63b797ef8652896b2495aeda not found: ID does not exist" Feb 17 13:51:15 crc kubenswrapper[4768]: I0217 13:51:15.546469 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" path="/var/lib/kubelet/pods/6b8a2819-6947-42eb-8421-3dfd4da9cab4/volumes" Feb 17 13:51:15 crc kubenswrapper[4768]: I0217 13:51:15.839224 4768 generic.go:334] "Generic (PLEG): container finished" podID="536648e3-7aff-4027-8132-3aed7835b43f" containerID="f088a633ec914281100c1b9e63c3986ba548791cdf800f6d2ad97abe164902db" exitCode=0 Feb 17 13:51:15 crc kubenswrapper[4768]: I0217 13:51:15.839286 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" 
event={"ID":"536648e3-7aff-4027-8132-3aed7835b43f","Type":"ContainerDied","Data":"f088a633ec914281100c1b9e63c3986ba548791cdf800f6d2ad97abe164902db"} Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.129669 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.203921 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-util\") pod \"536648e3-7aff-4027-8132-3aed7835b43f\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.203991 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-bundle\") pod \"536648e3-7aff-4027-8132-3aed7835b43f\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.204027 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbsj7\" (UniqueName: \"kubernetes.io/projected/536648e3-7aff-4027-8132-3aed7835b43f-kube-api-access-gbsj7\") pod \"536648e3-7aff-4027-8132-3aed7835b43f\" (UID: \"536648e3-7aff-4027-8132-3aed7835b43f\") " Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.204708 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-bundle" (OuterVolumeSpecName: "bundle") pod "536648e3-7aff-4027-8132-3aed7835b43f" (UID: "536648e3-7aff-4027-8132-3aed7835b43f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.208463 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536648e3-7aff-4027-8132-3aed7835b43f-kube-api-access-gbsj7" (OuterVolumeSpecName: "kube-api-access-gbsj7") pod "536648e3-7aff-4027-8132-3aed7835b43f" (UID: "536648e3-7aff-4027-8132-3aed7835b43f"). InnerVolumeSpecName "kube-api-access-gbsj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.216896 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-util" (OuterVolumeSpecName: "util") pod "536648e3-7aff-4027-8132-3aed7835b43f" (UID: "536648e3-7aff-4027-8132-3aed7835b43f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.305229 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-util\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.305266 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/536648e3-7aff-4027-8132-3aed7835b43f-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.305276 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbsj7\" (UniqueName: \"kubernetes.io/projected/536648e3-7aff-4027-8132-3aed7835b43f-kube-api-access-gbsj7\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.702473 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.702781 4768 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.750131 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.854601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" event={"ID":"536648e3-7aff-4027-8132-3aed7835b43f","Type":"ContainerDied","Data":"42a03b802fdb95b200bf472f39b341e9690d3f985cf2fcfe07313448551363aa"} Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.854657 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a03b802fdb95b200bf472f39b341e9690d3f985cf2fcfe07313448551363aa" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.854972 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj" Feb 17 13:51:17 crc kubenswrapper[4768]: I0217 13:51:17.900154 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:20 crc kubenswrapper[4768]: I0217 13:51:20.354477 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rftkv"] Feb 17 13:51:20 crc kubenswrapper[4768]: I0217 13:51:20.874945 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rftkv" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="registry-server" containerID="cri-o://95925a766e98b829f3f301f677e67644e7165038cc5e9c67e7ed0ab8a0643402" gracePeriod=2 Feb 17 13:51:21 crc kubenswrapper[4768]: I0217 13:51:21.885139 4768 generic.go:334] "Generic (PLEG): container finished" podID="05ba0b25-ef30-47cb-a4fe-bff21358432e" 
containerID="95925a766e98b829f3f301f677e67644e7165038cc5e9c67e7ed0ab8a0643402" exitCode=0 Feb 17 13:51:21 crc kubenswrapper[4768]: I0217 13:51:21.885179 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rftkv" event={"ID":"05ba0b25-ef30-47cb-a4fe-bff21358432e","Type":"ContainerDied","Data":"95925a766e98b829f3f301f677e67644e7165038cc5e9c67e7ed0ab8a0643402"} Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.379010 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.477470 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thrbt\" (UniqueName: \"kubernetes.io/projected/05ba0b25-ef30-47cb-a4fe-bff21358432e-kube-api-access-thrbt\") pod \"05ba0b25-ef30-47cb-a4fe-bff21358432e\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.477684 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-utilities\") pod \"05ba0b25-ef30-47cb-a4fe-bff21358432e\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.477789 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-catalog-content\") pod \"05ba0b25-ef30-47cb-a4fe-bff21358432e\" (UID: \"05ba0b25-ef30-47cb-a4fe-bff21358432e\") " Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.478566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-utilities" (OuterVolumeSpecName: "utilities") pod "05ba0b25-ef30-47cb-a4fe-bff21358432e" (UID: 
"05ba0b25-ef30-47cb-a4fe-bff21358432e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.484998 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ba0b25-ef30-47cb-a4fe-bff21358432e-kube-api-access-thrbt" (OuterVolumeSpecName: "kube-api-access-thrbt") pod "05ba0b25-ef30-47cb-a4fe-bff21358432e" (UID: "05ba0b25-ef30-47cb-a4fe-bff21358432e"). InnerVolumeSpecName "kube-api-access-thrbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.543517 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05ba0b25-ef30-47cb-a4fe-bff21358432e" (UID: "05ba0b25-ef30-47cb-a4fe-bff21358432e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.579917 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thrbt\" (UniqueName: \"kubernetes.io/projected/05ba0b25-ef30-47cb-a4fe-bff21358432e-kube-api-access-thrbt\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.579969 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.579984 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05ba0b25-ef30-47cb-a4fe-bff21358432e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.892827 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rftkv" event={"ID":"05ba0b25-ef30-47cb-a4fe-bff21358432e","Type":"ContainerDied","Data":"4fb16a9f20c810f8ff08f26895c480f5a01953bb59857348ca5a220bfb0c26ea"} Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.892884 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rftkv" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.893140 4768 scope.go:117] "RemoveContainer" containerID="95925a766e98b829f3f301f677e67644e7165038cc5e9c67e7ed0ab8a0643402" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.910744 4768 scope.go:117] "RemoveContainer" containerID="405c89ec22602051345439084c534dfdb1d7c760b6d3d1132d8926bcb0110f64" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.926461 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rftkv"] Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.926729 4768 scope.go:117] "RemoveContainer" containerID="d8c40f23530c652b6594eee11f1fbb7a32781b3997d1be0619e6ba48b1a92021" Feb 17 13:51:22 crc kubenswrapper[4768]: I0217 13:51:22.928157 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rftkv"] Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.542026 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" path="/var/lib/kubelet/pods/05ba0b25-ef30-47cb-a4fe-bff21358432e/volumes" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665072 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622"] Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665364 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="extract-utilities" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665388 
4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="extract-utilities" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665407 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536648e3-7aff-4027-8132-3aed7835b43f" containerName="pull" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665419 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="536648e3-7aff-4027-8132-3aed7835b43f" containerName="pull" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665438 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="extract-content" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665450 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="extract-content" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665463 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="registry-server" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665472 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="registry-server" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665485 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="extract-utilities" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665493 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="extract-utilities" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665508 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="extract-content" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665516 4768 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="extract-content" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665526 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536648e3-7aff-4027-8132-3aed7835b43f" containerName="util" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665533 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="536648e3-7aff-4027-8132-3aed7835b43f" containerName="util" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665545 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="registry-server" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665553 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="registry-server" Feb 17 13:51:23 crc kubenswrapper[4768]: E0217 13:51:23.665566 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536648e3-7aff-4027-8132-3aed7835b43f" containerName="extract" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665574 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="536648e3-7aff-4027-8132-3aed7835b43f" containerName="extract" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665734 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a2819-6947-42eb-8421-3dfd4da9cab4" containerName="registry-server" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665749 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ba0b25-ef30-47cb-a4fe-bff21358432e" containerName="registry-server" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.665764 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="536648e3-7aff-4027-8132-3aed7835b43f" containerName="extract" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.666242 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.670970 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-gb6b6" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.692439 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrdf\" (UniqueName: \"kubernetes.io/projected/3f9e32d0-4476-4d44-8266-d821ad79f322-kube-api-access-zrrdf\") pod \"openstack-operator-controller-init-5b99dcf57b-tb622\" (UID: \"3f9e32d0-4476-4d44-8266-d821ad79f322\") " pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.696713 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622"] Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.793607 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrrdf\" (UniqueName: \"kubernetes.io/projected/3f9e32d0-4476-4d44-8266-d821ad79f322-kube-api-access-zrrdf\") pod \"openstack-operator-controller-init-5b99dcf57b-tb622\" (UID: \"3f9e32d0-4476-4d44-8266-d821ad79f322\") " pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.819695 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrrdf\" (UniqueName: \"kubernetes.io/projected/3f9e32d0-4476-4d44-8266-d821ad79f322-kube-api-access-zrrdf\") pod \"openstack-operator-controller-init-5b99dcf57b-tb622\" (UID: \"3f9e32d0-4476-4d44-8266-d821ad79f322\") " pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" Feb 17 13:51:23 crc kubenswrapper[4768]: I0217 13:51:23.985641 4768 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" Feb 17 13:51:24 crc kubenswrapper[4768]: I0217 13:51:24.214597 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622"] Feb 17 13:51:24 crc kubenswrapper[4768]: I0217 13:51:24.914613 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" event={"ID":"3f9e32d0-4476-4d44-8266-d821ad79f322","Type":"ContainerStarted","Data":"44ba0f5034ffb2163a899619fd430e12c38685c91a9f29ba901a46868d8dc44d"} Feb 17 13:51:28 crc kubenswrapper[4768]: I0217 13:51:28.060147 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:51:28 crc kubenswrapper[4768]: I0217 13:51:28.060534 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:51:28 crc kubenswrapper[4768]: I0217 13:51:28.961352 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" event={"ID":"3f9e32d0-4476-4d44-8266-d821ad79f322","Type":"ContainerStarted","Data":"b1a3a83630ec0b47687f999e2e9a00f5493043d54eda2a2dd283947ba1b0a900"} Feb 17 13:51:28 crc kubenswrapper[4768]: I0217 13:51:28.962167 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" Feb 17 13:51:28 crc 
kubenswrapper[4768]: I0217 13:51:28.990574 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" podStartSLOduration=2.406770734 podStartE2EDuration="5.990509876s" podCreationTimestamp="2026-02-17 13:51:23 +0000 UTC" firstStartedPulling="2026-02-17 13:51:24.229627752 +0000 UTC m=+903.509014194" lastFinishedPulling="2026-02-17 13:51:27.813366894 +0000 UTC m=+907.092753336" observedRunningTime="2026-02-17 13:51:28.986433544 +0000 UTC m=+908.265819996" watchObservedRunningTime="2026-02-17 13:51:28.990509876 +0000 UTC m=+908.269896328" Feb 17 13:51:33 crc kubenswrapper[4768]: I0217 13:51:33.990983 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5b99dcf57b-tb622" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.133020 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-99tll"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.134219 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.138600 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-qbbrd" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.140910 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.141624 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.146036 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b8gj7" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.150696 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-99tll"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.156664 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.160834 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.161563 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.168080 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7wnck"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.168815 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.176515 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-vgc87" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.176778 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5zdhw" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.209678 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.228792 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7wnck"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.243395 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.271276 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.271929 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.279698 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mcp2w" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.297273 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.297999 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.304621 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tpj4d" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.317559 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.323573 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrps\" (UniqueName: \"kubernetes.io/projected/ea7f039d-d594-4b9e-9dac-06e9f13bdba2-kube-api-access-9qrps\") pod \"barbican-operator-controller-manager-868647ff47-99tll\" (UID: \"ea7f039d-d594-4b9e-9dac-06e9f13bdba2\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.323637 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6nb7\" (UniqueName: 
\"kubernetes.io/projected/f5689d3e-d755-485e-80a1-e808c460022d-kube-api-access-z6nb7\") pod \"glance-operator-controller-manager-77987464f4-7wnck\" (UID: \"f5689d3e-d755-485e-80a1-e808c460022d\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.323664 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfw2r\" (UniqueName: \"kubernetes.io/projected/633a0666-42b2-4422-9b47-fb69c1105655-kube-api-access-nfw2r\") pod \"cinder-operator-controller-manager-5d946d989d-hrkzn\" (UID: \"633a0666-42b2-4422-9b47-fb69c1105655\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.323697 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9kg\" (UniqueName: \"kubernetes.io/projected/663c818c-0255-4f9c-827e-ccb2b430c5e3-kube-api-access-tv9kg\") pod \"designate-operator-controller-manager-6d8bf5c495-hn2hg\" (UID: \"663c818c-0255-4f9c-827e-ccb2b430c5e3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.350681 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.355641 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.358056 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.358135 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vgzrk" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.376402 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.377198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.379330 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-rrlqv" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.387137 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.396369 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.397631 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.401367 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-s9vtz" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.414137 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.420632 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.424639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfh7\" (UniqueName: \"kubernetes.io/projected/9e699840-e748-4e5d-8629-f0379a7cce08-kube-api-access-whfh7\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.425133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrps\" (UniqueName: \"kubernetes.io/projected/ea7f039d-d594-4b9e-9dac-06e9f13bdba2-kube-api-access-9qrps\") pod \"barbican-operator-controller-manager-868647ff47-99tll\" (UID: \"ea7f039d-d594-4b9e-9dac-06e9f13bdba2\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.425194 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6nb7\" (UniqueName: \"kubernetes.io/projected/f5689d3e-d755-485e-80a1-e808c460022d-kube-api-access-z6nb7\") pod \"glance-operator-controller-manager-77987464f4-7wnck\" (UID: 
\"f5689d3e-d755-485e-80a1-e808c460022d\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.425227 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfw2r\" (UniqueName: \"kubernetes.io/projected/633a0666-42b2-4422-9b47-fb69c1105655-kube-api-access-nfw2r\") pod \"cinder-operator-controller-manager-5d946d989d-hrkzn\" (UID: \"633a0666-42b2-4422-9b47-fb69c1105655\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.425253 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnvh\" (UniqueName: \"kubernetes.io/projected/8b8ebdec-5fc0-4f66-9a22-b833d3cd4283-kube-api-access-lnnvh\") pod \"heat-operator-controller-manager-69f49c598c-bl6rp\" (UID: \"8b8ebdec-5fc0-4f66-9a22-b833d3cd4283\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.425278 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwtf\" (UniqueName: \"kubernetes.io/projected/aa6bb524-9950-4add-9b03-04f324c9a02d-kube-api-access-2rwtf\") pod \"horizon-operator-controller-manager-5b9b8895d5-hrb5z\" (UID: \"aa6bb524-9950-4add-9b03-04f324c9a02d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.425308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9kg\" (UniqueName: \"kubernetes.io/projected/663c818c-0255-4f9c-827e-ccb2b430c5e3-kube-api-access-tv9kg\") pod \"designate-operator-controller-manager-6d8bf5c495-hn2hg\" (UID: \"663c818c-0255-4f9c-827e-ccb2b430c5e3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" 
Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.425355 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.435569 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.436414 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.442767 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.444092 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.446467 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xnsm4" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.446684 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-r2m54" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.452331 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfw2r\" (UniqueName: \"kubernetes.io/projected/633a0666-42b2-4422-9b47-fb69c1105655-kube-api-access-nfw2r\") pod \"cinder-operator-controller-manager-5d946d989d-hrkzn\" (UID: \"633a0666-42b2-4422-9b47-fb69c1105655\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.456801 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qrps\" (UniqueName: \"kubernetes.io/projected/ea7f039d-d594-4b9e-9dac-06e9f13bdba2-kube-api-access-9qrps\") pod \"barbican-operator-controller-manager-868647ff47-99tll\" (UID: \"ea7f039d-d594-4b9e-9dac-06e9f13bdba2\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.456907 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.457598 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.460140 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9kg\" (UniqueName: \"kubernetes.io/projected/663c818c-0255-4f9c-827e-ccb2b430c5e3-kube-api-access-tv9kg\") pod \"designate-operator-controller-manager-6d8bf5c495-hn2hg\" (UID: \"663c818c-0255-4f9c-827e-ccb2b430c5e3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.461264 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6nb7\" (UniqueName: \"kubernetes.io/projected/f5689d3e-d755-485e-80a1-e808c460022d-kube-api-access-z6nb7\") pod \"glance-operator-controller-manager-77987464f4-7wnck\" (UID: \"f5689d3e-d755-485e-80a1-e808c460022d\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.461335 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.464337 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.465154 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.468281 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.469680 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.471865 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.472356 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.477356 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-q55t9" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.477464 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-f2m9b" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.477679 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vl2kn" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.482835 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.487190 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.492467 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.494548 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.509362 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.526954 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vdn\" (UniqueName: \"kubernetes.io/projected/c040f799-8668-44a6-b694-0b253aaf7930-kube-api-access-47vdn\") pod \"manila-operator-controller-manager-54f6768c69-qfz4j\" (UID: \"c040f799-8668-44a6-b694-0b253aaf7930\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527011 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnvh\" (UniqueName: \"kubernetes.io/projected/8b8ebdec-5fc0-4f66-9a22-b833d3cd4283-kube-api-access-lnnvh\") pod \"heat-operator-controller-manager-69f49c598c-bl6rp\" (UID: \"8b8ebdec-5fc0-4f66-9a22-b833d3cd4283\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527050 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwtf\" (UniqueName: \"kubernetes.io/projected/aa6bb524-9950-4add-9b03-04f324c9a02d-kube-api-access-2rwtf\") pod \"horizon-operator-controller-manager-5b9b8895d5-hrb5z\" (UID: \"aa6bb524-9950-4add-9b03-04f324c9a02d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527119 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx74\" (UniqueName: \"kubernetes.io/projected/2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df-kube-api-access-qxx74\") pod \"neutron-operator-controller-manager-64ddbf8bb-4wm78\" (UID: \"2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527151 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpw5v\" (UniqueName: \"kubernetes.io/projected/2560f30e-2ede-4f2e-a3a1-e3e7e96b5792-kube-api-access-lpw5v\") pod \"ironic-operator-controller-manager-554564d7fc-pfr2g\" (UID: \"2560f30e-2ede-4f2e-a3a1-e3e7e96b5792\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527190 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q44w\" (UniqueName: \"kubernetes.io/projected/93f56e48-e402-471a-b9c0-0fac088f7a7e-kube-api-access-9q44w\") pod \"mariadb-operator-controller-manager-6994f66f48-v5svk\" (UID: \"93f56e48-e402-471a-b9c0-0fac088f7a7e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527217 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527254 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l527h\" (UniqueName: 
\"kubernetes.io/projected/4f912c2e-e494-46c0-9231-40c106b00c40-kube-api-access-l527h\") pod \"keystone-operator-controller-manager-b4d948c87-7ntmb\" (UID: \"4f912c2e-e494-46c0-9231-40c106b00c40\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527293 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whfh7\" (UniqueName: \"kubernetes.io/projected/9e699840-e748-4e5d-8629-f0379a7cce08-kube-api-access-whfh7\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.527348 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bkr6\" (UniqueName: \"kubernetes.io/projected/2567e2d5-83bd-4345-b94b-36527465ce1b-kube-api-access-8bkr6\") pod \"octavia-operator-controller-manager-69f8888797-j4n52\" (UID: \"2567e2d5-83bd-4345-b94b-36527465ce1b\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" Feb 17 13:51:55 crc kubenswrapper[4768]: E0217 13:51:55.527491 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 13:51:55 crc kubenswrapper[4768]: E0217 13:51:55.527546 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert podName:9e699840-e748-4e5d-8629-f0379a7cce08 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:56.027524809 +0000 UTC m=+935.306911351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert") pod "infra-operator-controller-manager-79d975b745-cpkx6" (UID: "9e699840-e748-4e5d-8629-f0379a7cce08") : secret "infra-operator-webhook-server-cert" not found Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.533481 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.534343 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.546626 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-75nk2" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.550071 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.552440 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.553902 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.555403 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nkdk2" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.560660 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.562534 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.564346 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwtf\" (UniqueName: \"kubernetes.io/projected/aa6bb524-9950-4add-9b03-04f324c9a02d-kube-api-access-2rwtf\") pod \"horizon-operator-controller-manager-5b9b8895d5-hrb5z\" (UID: \"aa6bb524-9950-4add-9b03-04f324c9a02d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.564611 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zz7lj" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.574849 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.577726 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whfh7\" (UniqueName: 
\"kubernetes.io/projected/9e699840-e748-4e5d-8629-f0379a7cce08-kube-api-access-whfh7\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.581159 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnvh\" (UniqueName: \"kubernetes.io/projected/8b8ebdec-5fc0-4f66-9a22-b833d3cd4283-kube-api-access-lnnvh\") pod \"heat-operator-controller-manager-69f49c598c-bl6rp\" (UID: \"8b8ebdec-5fc0-4f66-9a22-b833d3cd4283\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.592007 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.598200 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.599807 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.613483 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.622907 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.626695 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-wqp2m" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628351 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsfm\" (UniqueName: \"kubernetes.io/projected/e48b6c11-496b-4f36-9155-119bbfb506f8-kube-api-access-5xsfm\") pod \"nova-operator-controller-manager-567668f5cf-9krmz\" (UID: \"e48b6c11-496b-4f36-9155-119bbfb506f8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628395 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bkr6\" (UniqueName: \"kubernetes.io/projected/2567e2d5-83bd-4345-b94b-36527465ce1b-kube-api-access-8bkr6\") pod \"octavia-operator-controller-manager-69f8888797-j4n52\" (UID: \"2567e2d5-83bd-4345-b94b-36527465ce1b\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4dm\" (UniqueName: \"kubernetes.io/projected/09c0d3ef-49e2-4dec-a95f-951be73d5740-kube-api-access-jk4dm\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdcqd\" (UniqueName: 
\"kubernetes.io/projected/96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a-kube-api-access-bdcqd\") pod \"ovn-operator-controller-manager-d44cf6b75-dp9tq\" (UID: \"96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628488 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47vdn\" (UniqueName: \"kubernetes.io/projected/c040f799-8668-44a6-b694-0b253aaf7930-kube-api-access-47vdn\") pod \"manila-operator-controller-manager-54f6768c69-qfz4j\" (UID: \"c040f799-8668-44a6-b694-0b253aaf7930\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jz9\" (UniqueName: \"kubernetes.io/projected/9aee7c4a-404a-434e-8aa9-b671553532d2-kube-api-access-75jz9\") pod \"placement-operator-controller-manager-8497b45c89-pvjkl\" (UID: \"9aee7c4a-404a-434e-8aa9-b671553532d2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxx74\" (UniqueName: \"kubernetes.io/projected/2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df-kube-api-access-qxx74\") pod \"neutron-operator-controller-manager-64ddbf8bb-4wm78\" (UID: \"2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpw5v\" (UniqueName: \"kubernetes.io/projected/2560f30e-2ede-4f2e-a3a1-e3e7e96b5792-kube-api-access-lpw5v\") pod 
\"ironic-operator-controller-manager-554564d7fc-pfr2g\" (UID: \"2560f30e-2ede-4f2e-a3a1-e3e7e96b5792\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628627 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q44w\" (UniqueName: \"kubernetes.io/projected/93f56e48-e402-471a-b9c0-0fac088f7a7e-kube-api-access-9q44w\") pod \"mariadb-operator-controller-manager-6994f66f48-v5svk\" (UID: \"93f56e48-e402-471a-b9c0-0fac088f7a7e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628651 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.628692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l527h\" (UniqueName: \"kubernetes.io/projected/4f912c2e-e494-46c0-9231-40c106b00c40-kube-api-access-l527h\") pod \"keystone-operator-controller-manager-b4d948c87-7ntmb\" (UID: \"4f912c2e-e494-46c0-9231-40c106b00c40\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.658876 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.686521 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bkr6\" (UniqueName: \"kubernetes.io/projected/2567e2d5-83bd-4345-b94b-36527465ce1b-kube-api-access-8bkr6\") pod \"octavia-operator-controller-manager-69f8888797-j4n52\" (UID: \"2567e2d5-83bd-4345-b94b-36527465ce1b\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.687154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxx74\" (UniqueName: \"kubernetes.io/projected/2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df-kube-api-access-qxx74\") pod \"neutron-operator-controller-manager-64ddbf8bb-4wm78\" (UID: \"2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.688885 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l527h\" (UniqueName: \"kubernetes.io/projected/4f912c2e-e494-46c0-9231-40c106b00c40-kube-api-access-l527h\") pod \"keystone-operator-controller-manager-b4d948c87-7ntmb\" (UID: \"4f912c2e-e494-46c0-9231-40c106b00c40\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.689569 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47vdn\" (UniqueName: \"kubernetes.io/projected/c040f799-8668-44a6-b694-0b253aaf7930-kube-api-access-47vdn\") pod \"manila-operator-controller-manager-54f6768c69-qfz4j\" (UID: \"c040f799-8668-44a6-b694-0b253aaf7930\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.693378 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q44w\" (UniqueName: \"kubernetes.io/projected/93f56e48-e402-471a-b9c0-0fac088f7a7e-kube-api-access-9q44w\") pod \"mariadb-operator-controller-manager-6994f66f48-v5svk\" (UID: \"93f56e48-e402-471a-b9c0-0fac088f7a7e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.693608 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpw5v\" (UniqueName: \"kubernetes.io/projected/2560f30e-2ede-4f2e-a3a1-e3e7e96b5792-kube-api-access-lpw5v\") pod \"ironic-operator-controller-manager-554564d7fc-pfr2g\" (UID: \"2560f30e-2ede-4f2e-a3a1-e3e7e96b5792\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.693686 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.695593 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.726453 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.730491 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xsfm\" (UniqueName: \"kubernetes.io/projected/e48b6c11-496b-4f36-9155-119bbfb506f8-kube-api-access-5xsfm\") pod \"nova-operator-controller-manager-567668f5cf-9krmz\" (UID: \"e48b6c11-496b-4f36-9155-119bbfb506f8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.731748 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk4dm\" (UniqueName: \"kubernetes.io/projected/09c0d3ef-49e2-4dec-a95f-951be73d5740-kube-api-access-jk4dm\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.731783 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdcqd\" (UniqueName: \"kubernetes.io/projected/96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a-kube-api-access-bdcqd\") pod \"ovn-operator-controller-manager-d44cf6b75-dp9tq\" (UID: \"96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.731837 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trwgl\" (UniqueName: \"kubernetes.io/projected/a9e933fb-130b-4a7e-91c4-9ca5f2747e35-kube-api-access-trwgl\") pod \"swift-operator-controller-manager-68f46476f-kmgl4\" (UID: \"a9e933fb-130b-4a7e-91c4-9ca5f2747e35\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" Feb 17 13:51:55 crc 
kubenswrapper[4768]: I0217 13:51:55.731894 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75jz9\" (UniqueName: \"kubernetes.io/projected/9aee7c4a-404a-434e-8aa9-b671553532d2-kube-api-access-75jz9\") pod \"placement-operator-controller-manager-8497b45c89-pvjkl\" (UID: \"9aee7c4a-404a-434e-8aa9-b671553532d2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.731983 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:55 crc kubenswrapper[4768]: E0217 13:51:55.732179 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:51:55 crc kubenswrapper[4768]: E0217 13:51:55.732261 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert podName:09c0d3ef-49e2-4dec-a95f-951be73d5740 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:56.232240342 +0000 UTC m=+935.511626784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" (UID: "09c0d3ef-49e2-4dec-a95f-951be73d5740") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.750463 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.753776 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.764401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xsfm\" (UniqueName: \"kubernetes.io/projected/e48b6c11-496b-4f36-9155-119bbfb506f8-kube-api-access-5xsfm\") pod \"nova-operator-controller-manager-567668f5cf-9krmz\" (UID: \"e48b6c11-496b-4f36-9155-119bbfb506f8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.769095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jz9\" (UniqueName: \"kubernetes.io/projected/9aee7c4a-404a-434e-8aa9-b671553532d2-kube-api-access-75jz9\") pod \"placement-operator-controller-manager-8497b45c89-pvjkl\" (UID: \"9aee7c4a-404a-434e-8aa9-b671553532d2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.770198 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.782875 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.783059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk4dm\" (UniqueName: \"kubernetes.io/projected/09c0d3ef-49e2-4dec-a95f-951be73d5740-kube-api-access-jk4dm\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.789389 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-bmwkh" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.793735 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.796335 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdcqd\" (UniqueName: \"kubernetes.io/projected/96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a-kube-api-access-bdcqd\") pod \"ovn-operator-controller-manager-d44cf6b75-dp9tq\" (UID: \"96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.799440 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.833234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trwgl\" (UniqueName: \"kubernetes.io/projected/a9e933fb-130b-4a7e-91c4-9ca5f2747e35-kube-api-access-trwgl\") pod \"swift-operator-controller-manager-68f46476f-kmgl4\" (UID: \"a9e933fb-130b-4a7e-91c4-9ca5f2747e35\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.833595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74ttq\" (UniqueName: \"kubernetes.io/projected/d8d8a911-905e-45e3-a4ed-35338f74806f-kube-api-access-74ttq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-4c9sb\" (UID: \"d8d8a911-905e-45e3-a4ed-35338f74806f\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.854190 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ln8v5"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.855399 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.856543 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ln8v5"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.867717 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-l9qbv" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.879228 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.890715 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.890896 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.892241 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.894643 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trwgl\" (UniqueName: \"kubernetes.io/projected/a9e933fb-130b-4a7e-91c4-9ca5f2747e35-kube-api-access-trwgl\") pod \"swift-operator-controller-manager-68f46476f-kmgl4\" (UID: \"a9e933fb-130b-4a7e-91c4-9ca5f2747e35\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.898575 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5c5mj" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.900192 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.900773 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.929023 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.936021 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74ttq\" (UniqueName: \"kubernetes.io/projected/d8d8a911-905e-45e3-a4ed-35338f74806f-kube-api-access-74ttq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-4c9sb\" (UID: \"d8d8a911-905e-45e3-a4ed-35338f74806f\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.936164 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrvw\" (UniqueName: \"kubernetes.io/projected/c8a74650-d867-4ab8-92a3-fcdc815247c4-kube-api-access-xcrvw\") pod \"test-operator-controller-manager-7866795846-ln8v5\" (UID: \"c8a74650-d867-4ab8-92a3-fcdc815247c4\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.957657 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd"] Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.958506 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.962784 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hkfbg" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.963027 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.964453 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 13:51:55 crc kubenswrapper[4768]: I0217 13:51:55.976196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74ttq\" (UniqueName: \"kubernetes.io/projected/d8d8a911-905e-45e3-a4ed-35338f74806f-kube-api-access-74ttq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-4c9sb\" (UID: \"d8d8a911-905e-45e3-a4ed-35338f74806f\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.007688 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd"] Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.035741 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt"] Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.037452 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc 
kubenswrapper[4768]: I0217 13:51:56.037495 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzdd\" (UniqueName: \"kubernetes.io/projected/71305e38-208f-43be-9bb9-32341555750c-kube-api-access-gwzdd\") pod \"watcher-operator-controller-manager-5db88f68c-xx7vm\" (UID: \"71305e38-208f-43be-9bb9-32341555750c\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.037529 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.037594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.037626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcrvw\" (UniqueName: \"kubernetes.io/projected/c8a74650-d867-4ab8-92a3-fcdc815247c4-kube-api-access-xcrvw\") pod \"test-operator-controller-manager-7866795846-ln8v5\" (UID: \"c8a74650-d867-4ab8-92a3-fcdc815247c4\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.037775 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.037846 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert podName:9e699840-e748-4e5d-8629-f0379a7cce08 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:57.037824197 +0000 UTC m=+936.317210729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert") pod "infra-operator-controller-manager-79d975b745-cpkx6" (UID: "9e699840-e748-4e5d-8629-f0379a7cce08") : secret "infra-operator-webhook-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.037998 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqsff\" (UniqueName: \"kubernetes.io/projected/e7b1071b-c742-4578-8226-12a6cce613f1-kube-api-access-cqsff\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.038188 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.043466 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bhsqr" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.046322 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.063532 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcrvw\" (UniqueName: \"kubernetes.io/projected/c8a74650-d867-4ab8-92a3-fcdc815247c4-kube-api-access-xcrvw\") pod \"test-operator-controller-manager-7866795846-ln8v5\" (UID: \"c8a74650-d867-4ab8-92a3-fcdc815247c4\") " pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.064973 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt"] Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.109247 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg"] Feb 17 13:51:56 crc kubenswrapper[4768]: W0217 13:51:56.137739 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod663c818c_0255_4f9c_827e_ccb2b430c5e3.slice/crio-4d3f815a1c04f56fd231cdaece56ad9042933c3533375ac05ee6078de0859509 WatchSource:0}: Error finding container 4d3f815a1c04f56fd231cdaece56ad9042933c3533375ac05ee6078de0859509: Status 404 returned error can't find the container with id 4d3f815a1c04f56fd231cdaece56ad9042933c3533375ac05ee6078de0859509 Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.140561 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.142876 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gws9x\" (UniqueName: \"kubernetes.io/projected/a5348349-c195-4af1-b367-a6cb0842305b-kube-api-access-gws9x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pqvzt\" (UID: \"a5348349-c195-4af1-b367-a6cb0842305b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.142928 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqsff\" (UniqueName: \"kubernetes.io/projected/e7b1071b-c742-4578-8226-12a6cce613f1-kube-api-access-cqsff\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.142991 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.143015 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwzdd\" (UniqueName: \"kubernetes.io/projected/71305e38-208f-43be-9bb9-32341555750c-kube-api-access-gwzdd\") pod \"watcher-operator-controller-manager-5db88f68c-xx7vm\" (UID: \"71305e38-208f-43be-9bb9-32341555750c\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 
13:51:56.143041 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.143156 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.143191 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:56.643179032 +0000 UTC m=+935.922565464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "metrics-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.143523 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.143555 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:56.643545382 +0000 UTC m=+935.922931824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "webhook-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.167937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqsff\" (UniqueName: \"kubernetes.io/projected/e7b1071b-c742-4578-8226-12a6cce613f1-kube-api-access-cqsff\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.171780 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwzdd\" (UniqueName: \"kubernetes.io/projected/71305e38-208f-43be-9bb9-32341555750c-kube-api-access-gwzdd\") pod \"watcher-operator-controller-manager-5db88f68c-xx7vm\" (UID: \"71305e38-208f-43be-9bb9-32341555750c\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.173873 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.188182 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.228555 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.244555 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gws9x\" (UniqueName: \"kubernetes.io/projected/a5348349-c195-4af1-b367-a6cb0842305b-kube-api-access-gws9x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pqvzt\" (UID: \"a5348349-c195-4af1-b367-a6cb0842305b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.244599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.244754 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.244821 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert podName:09c0d3ef-49e2-4dec-a95f-951be73d5740 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:57.244785103 +0000 UTC m=+936.524171545 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" (UID: "09c0d3ef-49e2-4dec-a95f-951be73d5740") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.263452 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gws9x\" (UniqueName: \"kubernetes.io/projected/a5348349-c195-4af1-b367-a6cb0842305b-kube-api-access-gws9x\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pqvzt\" (UID: \"a5348349-c195-4af1-b367-a6cb0842305b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.351272 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp"] Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.356495 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7wnck"] Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.371596 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" Feb 17 13:51:56 crc kubenswrapper[4768]: W0217 13:51:56.392367 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8ebdec_5fc0_4f66_9a22_b833d3cd4283.slice/crio-682f49300bdec05e100ed12a02b3a8f590941c6a0a0de40ffe0bc07829026c6a WatchSource:0}: Error finding container 682f49300bdec05e100ed12a02b3a8f590941c6a0a0de40ffe0bc07829026c6a: Status 404 returned error can't find the container with id 682f49300bdec05e100ed12a02b3a8f590941c6a0a0de40ffe0bc07829026c6a Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.652788 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.652874 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.654339 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.654390 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. 
No retries permitted until 2026-02-17 13:51:57.654373665 +0000 UTC m=+936.933760117 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "webhook-server-cert" not found
Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.654927 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 13:51:56 crc kubenswrapper[4768]: E0217 13:51:56.654961 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:57.654950411 +0000 UTC m=+936.934336873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "metrics-server-cert" not found
Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.761161 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g"]
Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.766233 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn"]
Feb 17 13:51:56 crc kubenswrapper[4768]: W0217 13:51:56.785888 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod633a0666_42b2_4422_9b47_fb69c1105655.slice/crio-11b1efb6b73b9f555f1b7e715c625ae3a5c77f254e401c8bcc3bbfdfaba06693 WatchSource:0}: Error finding container 11b1efb6b73b9f555f1b7e715c625ae3a5c77f254e401c8bcc3bbfdfaba06693: Status 404 returned error can't find the container with id 11b1efb6b73b9f555f1b7e715c625ae3a5c77f254e401c8bcc3bbfdfaba06693
Feb 17 13:51:56 crc kubenswrapper[4768]: I0217 13:51:56.997552 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.006885 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.021772 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.026946 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.035883 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.054758 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-99tll"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.063936 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6"
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.064062 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.064135 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert podName:9e699840-e748-4e5d-8629-f0379a7cce08 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:59.064116901 +0000 UTC m=+938.343503433 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert") pod "infra-operator-controller-manager-79d975b745-cpkx6" (UID: "9e699840-e748-4e5d-8629-f0379a7cce08") : secret "infra-operator-webhook-server-cert" not found
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.074675 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.082287 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.089674 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.140251 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.153906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" event={"ID":"2560f30e-2ede-4f2e-a3a1-e3e7e96b5792","Type":"ContainerStarted","Data":"279dc04ce1a34c51b837613aa4d3d0034d5c0ff56b7ea9f6d5ccf2fbf903480d"}
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.155348 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-ln8v5"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.165833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" event={"ID":"2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df","Type":"ContainerStarted","Data":"563f564299ba75d374f94c9513b39c8bdbd743c7ecba249536a523796000457f"}
Feb 17 13:51:57 crc kubenswrapper[4768]: W0217 13:51:57.170685 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9e933fb_130b_4a7e_91c4_9ca5f2747e35.slice/crio-c1eb89b7c35ee0cc13e7a91e461cc9c83f38e7ad853046b798fb5b44f935b800 WatchSource:0}: Error finding container c1eb89b7c35ee0cc13e7a91e461cc9c83f38e7ad853046b798fb5b44f935b800: Status 404 returned error can't find the container with id c1eb89b7c35ee0cc13e7a91e461cc9c83f38e7ad853046b798fb5b44f935b800
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.172543 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" event={"ID":"9aee7c4a-404a-434e-8aa9-b671553532d2","Type":"ContainerStarted","Data":"57fd20012ff2a59d27fee890a14cc15a55c0d61201e0756f2e357ea2fc802118"}
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.173227 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcrvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-ln8v5_openstack-operators(c8a74650-d867-4ab8-92a3-fcdc815247c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.174428 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" podUID="c8a74650-d867-4ab8-92a3-fcdc815247c4"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.174520 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" event={"ID":"93f56e48-e402-471a-b9c0-0fac088f7a7e","Type":"ContainerStarted","Data":"65a004e80cd41240ce0c79eeb5ea988409b9a61872d7628651d08833ef059013"}
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.175313 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trwgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-kmgl4_openstack-operators(a9e933fb-130b-4a7e-91c4-9ca5f2747e35): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.176545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" event={"ID":"2567e2d5-83bd-4345-b94b-36527465ce1b","Type":"ContainerStarted","Data":"251dae10d13970fc38fe3d4e6fc21b236e79344719b695e953f241d2de29e097"}
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.176898 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" podUID="a9e933fb-130b-4a7e-91c4-9ca5f2747e35"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.178174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" event={"ID":"ea7f039d-d594-4b9e-9dac-06e9f13bdba2","Type":"ContainerStarted","Data":"98c69eb68fa2e8a9e05b2a32f6e52c3aed92c3437809dc8891fe5da56420751f"}
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.179056 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.181062 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" event={"ID":"f5689d3e-d755-485e-80a1-e808c460022d","Type":"ContainerStarted","Data":"bf9a2098323e2f8ce6102d2841ba266e074afe1677723548eeff1b9d3f38863d"}
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.187021 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gws9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-pqvzt_openstack-operators(a5348349-c195-4af1-b367-a6cb0842305b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.187315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" event={"ID":"633a0666-42b2-4422-9b47-fb69c1105655","Type":"ContainerStarted","Data":"11b1efb6b73b9f555f1b7e715c625ae3a5c77f254e401c8bcc3bbfdfaba06693"}
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.188079 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz"]
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.188248 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" podUID="a5348349-c195-4af1-b367-a6cb0842305b"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.194255 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.194914 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" event={"ID":"aa6bb524-9950-4add-9b03-04f324c9a02d","Type":"ContainerStarted","Data":"eb166dcf6273a40a66f1b74dc72e2faf38cae4ba9b94cd8fe8abcea658b89b29"}
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.203326 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" event={"ID":"663c818c-0255-4f9c-827e-ccb2b430c5e3","Type":"ContainerStarted","Data":"4d3f815a1c04f56fd231cdaece56ad9042933c3533375ac05ee6078de0859509"}
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.211274 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm"]
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.214528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" event={"ID":"8b8ebdec-5fc0-4f66-9a22-b833d3cd4283","Type":"ContainerStarted","Data":"682f49300bdec05e100ed12a02b3a8f590941c6a0a0de40ffe0bc07829026c6a"}
Feb 17 13:51:57 crc kubenswrapper[4768]: W0217 13:51:57.215091 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d8a911_905e_45e3_a4ed_35338f74806f.slice/crio-44e0d09e820f4ab7c5abf91f03d62626e68e75e39a948e3022e83a89189c01dd WatchSource:0}: Error finding container 44e0d09e820f4ab7c5abf91f03d62626e68e75e39a948e3022e83a89189c01dd: Status 404 returned error can't find the container with id 44e0d09e820f4ab7c5abf91f03d62626e68e75e39a948e3022e83a89189c01dd
Feb 17 13:51:57 crc kubenswrapper[4768]: W0217 13:51:57.217522 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71305e38_208f_43be_9bb9_32341555750c.slice/crio-49ef38412f1f68c66a66a26ddf6178b224dbb8c2bc87798dfc6516c706489a6d WatchSource:0}: Error finding container 49ef38412f1f68c66a66a26ddf6178b224dbb8c2bc87798dfc6516c706489a6d: Status 404 returned error can't find the container with id 49ef38412f1f68c66a66a26ddf6178b224dbb8c2bc87798dfc6516c706489a6d
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.217857 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" event={"ID":"c040f799-8668-44a6-b694-0b253aaf7930","Type":"ContainerStarted","Data":"eb28843b82dbd9005cc8cfdcb022b6abefcdf05d6ad5533c64d1d29e502d2ab8"}
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.218762 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74ttq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-4c9sb_openstack-operators(d8d8a911-905e-45e3-a4ed-35338f74806f): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.219142 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" event={"ID":"96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a","Type":"ContainerStarted","Data":"b01a0aac9fed92ab789aa0ecacf36fa6054b295ea7364d5d1d68b56c8ef9d494"}
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.220030 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" podUID="d8d8a911-905e-45e3-a4ed-35338f74806f"
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.220377 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gwzdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-xx7vm_openstack-operators(71305e38-208f-43be-9bb9-32341555750c): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.220934 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" event={"ID":"4f912c2e-e494-46c0-9231-40c106b00c40","Type":"ContainerStarted","Data":"6b6364aac63d13a908d85ecb17c8672d64346f79b7ae816d9f2bd9be47ed8b95"}
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.221529 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" podUID="71305e38-208f-43be-9bb9-32341555750c"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.268213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7"
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.268421 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.268599 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert podName:09c0d3ef-49e2-4dec-a95f-951be73d5740 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:59.268582268 +0000 UTC m=+938.547968710 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" (UID: "09c0d3ef-49e2-4dec-a95f-951be73d5740") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.673751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd"
Feb 17 13:51:57 crc kubenswrapper[4768]: I0217 13:51:57.673806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd"
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.673854 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.673916 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.673926 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:59.673905053 +0000 UTC m=+938.953291555 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "webhook-server-cert" not found
Feb 17 13:51:57 crc kubenswrapper[4768]: E0217 13:51:57.673993 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:51:59.673973745 +0000 UTC m=+938.953360237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "metrics-server-cert" not found
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.059423 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.059473 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.059509 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.060164 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83ffe2b5d1ed0faaa82ed446a55b456fa3a71e8473ab304c756bbf132bdab653"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.060227 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://83ffe2b5d1ed0faaa82ed446a55b456fa3a71e8473ab304c756bbf132bdab653" gracePeriod=600
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.232031 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" event={"ID":"a5348349-c195-4af1-b367-a6cb0842305b","Type":"ContainerStarted","Data":"af00a743e2412f587bc433b45f9d86defc25596e1bddfdf6da50c9694faf4aa6"}
Feb 17 13:51:58 crc kubenswrapper[4768]: E0217 13:51:58.235819 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" podUID="a5348349-c195-4af1-b367-a6cb0842305b"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.240474 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="83ffe2b5d1ed0faaa82ed446a55b456fa3a71e8473ab304c756bbf132bdab653" exitCode=0
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.240515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"83ffe2b5d1ed0faaa82ed446a55b456fa3a71e8473ab304c756bbf132bdab653"}
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.240568 4768 scope.go:117] "RemoveContainer" containerID="261ea7265dca6d9f9150a1c46ec950cce5894a7910bfec8d9ee8e08fac1f7c8f"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.247724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" event={"ID":"a9e933fb-130b-4a7e-91c4-9ca5f2747e35","Type":"ContainerStarted","Data":"c1eb89b7c35ee0cc13e7a91e461cc9c83f38e7ad853046b798fb5b44f935b800"}
Feb 17 13:51:58 crc kubenswrapper[4768]: E0217 13:51:58.252508 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" podUID="a9e933fb-130b-4a7e-91c4-9ca5f2747e35"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.252847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" event={"ID":"d8d8a911-905e-45e3-a4ed-35338f74806f","Type":"ContainerStarted","Data":"44e0d09e820f4ab7c5abf91f03d62626e68e75e39a948e3022e83a89189c01dd"}
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.255159 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" event={"ID":"e48b6c11-496b-4f36-9155-119bbfb506f8","Type":"ContainerStarted","Data":"c3c294df167b58e3ff3a7b7d910649c5cd010c57c322b0414da1839b88ca90d0"}
Feb 17 13:51:58 crc kubenswrapper[4768]: E0217 13:51:58.255600 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" podUID="d8d8a911-905e-45e3-a4ed-35338f74806f"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.259812 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" event={"ID":"c8a74650-d867-4ab8-92a3-fcdc815247c4","Type":"ContainerStarted","Data":"31b44353114b6d73065297c18fabcd9028f9b38c96156c8d2935422d234f8b37"}
Feb 17 13:51:58 crc kubenswrapper[4768]: E0217 13:51:58.262151 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" podUID="c8a74650-d867-4ab8-92a3-fcdc815247c4"
Feb 17 13:51:58 crc kubenswrapper[4768]: I0217 13:51:58.263459 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" event={"ID":"71305e38-208f-43be-9bb9-32341555750c","Type":"ContainerStarted","Data":"49ef38412f1f68c66a66a26ddf6178b224dbb8c2bc87798dfc6516c706489a6d"}
Feb 17 13:51:58 crc kubenswrapper[4768]: E0217 13:51:58.264685 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" podUID="71305e38-208f-43be-9bb9-32341555750c"
Feb 17 13:51:59 crc kubenswrapper[4768]: I0217 13:51:59.091167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6"
Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.091537 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.091683 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert podName:9e699840-e748-4e5d-8629-f0379a7cce08 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:03.09166084 +0000 UTC m=+942.371047332 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert") pod "infra-operator-controller-manager-79d975b745-cpkx6" (UID: "9e699840-e748-4e5d-8629-f0379a7cce08") : secret "infra-operator-webhook-server-cert" not found
Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.287306 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" podUID="71305e38-208f-43be-9bb9-32341555750c"
Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.287674 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" podUID="a5348349-c195-4af1-b367-a6cb0842305b"
Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.287810 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" podUID="d8d8a911-905e-45e3-a4ed-35338f74806f" Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.287980 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" podUID="c8a74650-d867-4ab8-92a3-fcdc815247c4" Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.290212 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" podUID="a9e933fb-130b-4a7e-91c4-9ca5f2747e35" Feb 17 13:51:59 crc kubenswrapper[4768]: I0217 13:51:59.298147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.298421 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.298472 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert podName:09c0d3ef-49e2-4dec-a95f-951be73d5740 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:03.298457021 +0000 UTC m=+942.577843463 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" (UID: "09c0d3ef-49e2-4dec-a95f-951be73d5740") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:51:59 crc kubenswrapper[4768]: I0217 13:51:59.704995 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:59 crc kubenswrapper[4768]: I0217 13:51:59.705246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.705430 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.705491 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. 
No retries permitted until 2026-02-17 13:52:03.705467512 +0000 UTC m=+942.984853954 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "webhook-server-cert" not found Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.705895 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 13:51:59 crc kubenswrapper[4768]: E0217 13:51:59.705927 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:03.705916804 +0000 UTC m=+942.985303246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "metrics-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: I0217 13:52:03.158511 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.158712 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.158834 4768 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert podName:9e699840-e748-4e5d-8629-f0379a7cce08 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:11.158818919 +0000 UTC m=+950.438205351 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert") pod "infra-operator-controller-manager-79d975b745-cpkx6" (UID: "9e699840-e748-4e5d-8629-f0379a7cce08") : secret "infra-operator-webhook-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: I0217 13:52:03.362156 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.362309 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.362361 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert podName:09c0d3ef-49e2-4dec-a95f-951be73d5740 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:11.362347211 +0000 UTC m=+950.641733653 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" (UID: "09c0d3ef-49e2-4dec-a95f-951be73d5740") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: I0217 13:52:03.768341 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:03 crc kubenswrapper[4768]: I0217 13:52:03.768731 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.768528 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.768959 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:11.76894237 +0000 UTC m=+951.048328812 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "webhook-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.768896 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 13:52:03 crc kubenswrapper[4768]: E0217 13:52:03.769004 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:11.768997272 +0000 UTC m=+951.048383714 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "metrics-server-cert" not found Feb 17 13:52:09 crc kubenswrapper[4768]: E0217 13:52:09.827505 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 17 13:52:09 crc kubenswrapper[4768]: E0217 13:52:09.828075 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75jz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-pvjkl_openstack-operators(9aee7c4a-404a-434e-8aa9-b671553532d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:52:09 crc kubenswrapper[4768]: E0217 13:52:09.829464 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" podUID="9aee7c4a-404a-434e-8aa9-b671553532d2" Feb 17 13:52:10 crc kubenswrapper[4768]: E0217 13:52:10.363653 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" podUID="9aee7c4a-404a-434e-8aa9-b671553532d2" Feb 17 13:52:10 crc kubenswrapper[4768]: E0217 13:52:10.523656 4768 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 13:52:10 crc kubenswrapper[4768]: E0217 13:52:10.523822 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l527h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-7ntmb_openstack-operators(4f912c2e-e494-46c0-9231-40c106b00c40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:52:10 crc kubenswrapper[4768]: E0217 13:52:10.524982 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" podUID="4f912c2e-e494-46c0-9231-40c106b00c40" Feb 17 13:52:11 crc kubenswrapper[4768]: I0217 13:52:11.176230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.176424 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.176762 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert podName:9e699840-e748-4e5d-8629-f0379a7cce08 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:27.176739548 +0000 UTC m=+966.456125990 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert") pod "infra-operator-controller-manager-79d975b745-cpkx6" (UID: "9e699840-e748-4e5d-8629-f0379a7cce08") : secret "infra-operator-webhook-server-cert" not found Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.194360 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.194564 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xsfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-9krmz_openstack-operators(e48b6c11-496b-4f36-9155-119bbfb506f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.195842 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" podUID="e48b6c11-496b-4f36-9155-119bbfb506f8" Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.371733 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" podUID="4f912c2e-e494-46c0-9231-40c106b00c40" Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.372490 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" podUID="e48b6c11-496b-4f36-9155-119bbfb506f8" Feb 17 13:52:11 crc kubenswrapper[4768]: I0217 13:52:11.381632 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.381926 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.381995 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert podName:09c0d3ef-49e2-4dec-a95f-951be73d5740 nodeName:}" 
failed. No retries permitted until 2026-02-17 13:52:27.381977016 +0000 UTC m=+966.661363458 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" (UID: "09c0d3ef-49e2-4dec-a95f-951be73d5740") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 13:52:11 crc kubenswrapper[4768]: I0217 13:52:11.788773 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:11 crc kubenswrapper[4768]: I0217 13:52:11.788852 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.788969 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 13:52:11 crc kubenswrapper[4768]: E0217 13:52:11.789028 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs podName:e7b1071b-c742-4578-8226-12a6cce613f1 nodeName:}" failed. No retries permitted until 2026-02-17 13:52:27.789013499 +0000 UTC m=+967.068399931 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs") pod "openstack-operator-controller-manager-57785b79bf-sjndd" (UID: "e7b1071b-c742-4578-8226-12a6cce613f1") : secret "webhook-server-cert" not found Feb 17 13:52:11 crc kubenswrapper[4768]: I0217 13:52:11.800432 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-metrics-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.389493 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" event={"ID":"663c818c-0255-4f9c-827e-ccb2b430c5e3","Type":"ContainerStarted","Data":"0562061dba3d04bc9ac5344b981ff7f63ff6d894051f94e8dedd75514e8a1603"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.389873 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.396372 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" event={"ID":"2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df","Type":"ContainerStarted","Data":"949ed3d4ec102f009675cb56b51949e6adfd86da59f421ec9bc8b66badd5061e"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.396509 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.403006 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" event={"ID":"c040f799-8668-44a6-b694-0b253aaf7930","Type":"ContainerStarted","Data":"b8bcf9c5fced97b21a76146145be1e704a027d22d91a5a630941af5ec47fbac4"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.403247 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.408722 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" event={"ID":"ea7f039d-d594-4b9e-9dac-06e9f13bdba2","Type":"ContainerStarted","Data":"7de79d06bcb9ce171b0ca4c2a12494b81a96a626b6df8977f2c0229ad5358255"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.409351 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.422809 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" event={"ID":"f5689d3e-d755-485e-80a1-e808c460022d","Type":"ContainerStarted","Data":"9eaa244c2d1ba02c86f3dfeae20f13183a80235f785fdcf8f3f6f4119659d5e7"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.422970 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.423856 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" podStartSLOduration=2.441980049 podStartE2EDuration="17.42384218s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:56.146434132 +0000 UTC m=+935.425820564" 
lastFinishedPulling="2026-02-17 13:52:11.128296243 +0000 UTC m=+950.407682695" observedRunningTime="2026-02-17 13:52:12.420451107 +0000 UTC m=+951.699837539" watchObservedRunningTime="2026-02-17 13:52:12.42384218 +0000 UTC m=+951.703228622" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.426397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" event={"ID":"633a0666-42b2-4422-9b47-fb69c1105655","Type":"ContainerStarted","Data":"ece47a57eb53e5a4ceaec32f9d8bcec5fbcddf4aee5573a4ddee4f57c8b7d6fd"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.426460 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.445010 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" event={"ID":"aa6bb524-9950-4add-9b03-04f324c9a02d","Type":"ContainerStarted","Data":"f4d583578eae6cd5cb9f4c60b755aac6625aba83734c3a70db757e7d5c7add08"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.445299 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.447722 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" event={"ID":"2560f30e-2ede-4f2e-a3a1-e3e7e96b5792","Type":"ContainerStarted","Data":"bdc0439ef02f2c599f16d6273d270b7fdc7ee2852c58283b23c4461a58fcfa7a"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.448630 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.458134 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" podStartSLOduration=3.362907698 podStartE2EDuration="17.458101574s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.073232263 +0000 UTC m=+936.352618705" lastFinishedPulling="2026-02-17 13:52:11.168426139 +0000 UTC m=+950.447812581" observedRunningTime="2026-02-17 13:52:12.449080796 +0000 UTC m=+951.728467238" watchObservedRunningTime="2026-02-17 13:52:12.458101574 +0000 UTC m=+951.737488026" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.463968 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" event={"ID":"8b8ebdec-5fc0-4f66-9a22-b833d3cd4283","Type":"ContainerStarted","Data":"8f32e63b689a4e97c2417916a3601c52e6096981342e5c838c5d2983eb38e07b"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.464630 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.465957 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" event={"ID":"96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a","Type":"ContainerStarted","Data":"16d575941eba5f232f9b7ab9c9cd868e392952671d7cb4249b47131f4c0fae10"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.466218 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.490525 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" 
event={"ID":"2567e2d5-83bd-4345-b94b-36527465ce1b","Type":"ContainerStarted","Data":"ca45e2440eece98a441e238eddfe4ed802ee03e6ed8a62321a337748640cec02"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.491278 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.498650 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" podStartSLOduration=3.390491329 podStartE2EDuration="17.498634112s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.02125007 +0000 UTC m=+936.300636512" lastFinishedPulling="2026-02-17 13:52:11.129392853 +0000 UTC m=+950.408779295" observedRunningTime="2026-02-17 13:52:12.495318421 +0000 UTC m=+951.774704883" watchObservedRunningTime="2026-02-17 13:52:12.498634112 +0000 UTC m=+951.778020554" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.513028 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"8c7bac4bbfa7a551b4bc123db2f23e406ad5c1983352def084482a277bb70005"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.524780 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" event={"ID":"93f56e48-e402-471a-b9c0-0fac088f7a7e","Type":"ContainerStarted","Data":"9c0e06c7a12bd2ede28cbcd68efa224e24b261bf7083987debb4b46cbb18f490"} Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.525473 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.539302 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" podStartSLOduration=3.414855599 podStartE2EDuration="17.539283642s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.005215537 +0000 UTC m=+936.284601989" lastFinishedPulling="2026-02-17 13:52:11.12964359 +0000 UTC m=+950.409030032" observedRunningTime="2026-02-17 13:52:12.530827429 +0000 UTC m=+951.810213871" watchObservedRunningTime="2026-02-17 13:52:12.539283642 +0000 UTC m=+951.818670084" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.576924 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" podStartSLOduration=3.4881674719999998 podStartE2EDuration="17.57690024s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.079507346 +0000 UTC m=+936.358893788" lastFinishedPulling="2026-02-17 13:52:11.168240114 +0000 UTC m=+950.447626556" observedRunningTime="2026-02-17 13:52:12.569522846 +0000 UTC m=+951.848909278" watchObservedRunningTime="2026-02-17 13:52:12.57690024 +0000 UTC m=+951.856286692" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.608423 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" podStartSLOduration=2.834310024 podStartE2EDuration="17.607642997s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:56.397505253 +0000 UTC m=+935.676891695" lastFinishedPulling="2026-02-17 13:52:11.170838226 +0000 UTC m=+950.450224668" observedRunningTime="2026-02-17 13:52:12.59940866 +0000 UTC m=+951.878795102" watchObservedRunningTime="2026-02-17 13:52:12.607642997 +0000 UTC m=+951.887029439" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.693134 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" podStartSLOduration=3.350337921 podStartE2EDuration="17.693115053s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:56.785302435 +0000 UTC m=+936.064688877" lastFinishedPulling="2026-02-17 13:52:11.128079567 +0000 UTC m=+950.407466009" observedRunningTime="2026-02-17 13:52:12.689331899 +0000 UTC m=+951.968718341" watchObservedRunningTime="2026-02-17 13:52:12.693115053 +0000 UTC m=+951.972501495" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.707755 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" podStartSLOduration=3.568032224 podStartE2EDuration="17.707733067s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.037541349 +0000 UTC m=+936.316927781" lastFinishedPulling="2026-02-17 13:52:11.177242142 +0000 UTC m=+950.456628624" observedRunningTime="2026-02-17 13:52:12.653567683 +0000 UTC m=+951.932954125" watchObservedRunningTime="2026-02-17 13:52:12.707733067 +0000 UTC m=+951.987119509" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.728073 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" podStartSLOduration=2.9961032149999998 podStartE2EDuration="17.728057397s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:56.397731189 +0000 UTC m=+935.677117631" lastFinishedPulling="2026-02-17 13:52:11.129685351 +0000 UTC m=+950.409071813" observedRunningTime="2026-02-17 13:52:12.713831645 +0000 UTC m=+951.993218087" watchObservedRunningTime="2026-02-17 13:52:12.728057397 +0000 UTC m=+952.007443839" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.786200 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" podStartSLOduration=3.437817924 podStartE2EDuration="17.78618335s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:56.819352814 +0000 UTC m=+936.098739246" lastFinishedPulling="2026-02-17 13:52:11.16771823 +0000 UTC m=+950.447104672" observedRunningTime="2026-02-17 13:52:12.77932525 +0000 UTC m=+952.058711692" watchObservedRunningTime="2026-02-17 13:52:12.78618335 +0000 UTC m=+952.065569792" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.849973 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" podStartSLOduration=3.759984964 podStartE2EDuration="17.849957377s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.078410125 +0000 UTC m=+936.357796557" lastFinishedPulling="2026-02-17 13:52:11.168382528 +0000 UTC m=+950.447768970" observedRunningTime="2026-02-17 13:52:12.846002539 +0000 UTC m=+952.125388981" watchObservedRunningTime="2026-02-17 13:52:12.849957377 +0000 UTC m=+952.129343819" Feb 17 13:52:12 crc kubenswrapper[4768]: I0217 13:52:12.852509 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" podStartSLOduration=3.754771211 podStartE2EDuration="17.852500278s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.078636871 +0000 UTC m=+936.358023313" lastFinishedPulling="2026-02-17 13:52:11.176365898 +0000 UTC m=+950.455752380" observedRunningTime="2026-02-17 13:52:12.830408269 +0000 UTC m=+952.109794711" watchObservedRunningTime="2026-02-17 13:52:12.852500278 +0000 UTC m=+952.131886720" Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.050246 4768 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dp9tq" Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.563909 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" event={"ID":"a9e933fb-130b-4a7e-91c4-9ca5f2747e35","Type":"ContainerStarted","Data":"4757bbe460174f6338d93d1cb00a243ac6937fc641d7072d163b9feb6eae4259"} Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.564161 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.566269 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" event={"ID":"c8a74650-d867-4ab8-92a3-fcdc815247c4","Type":"ContainerStarted","Data":"2a02f54d6f1ab1cb540d964d04edfebd76637feda4fbc9d79f59833f7e8d2a98"} Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.566999 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.574589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" event={"ID":"71305e38-208f-43be-9bb9-32341555750c","Type":"ContainerStarted","Data":"b1f6e9da8fdd5d7fae06f6f77988b9cbdaab8eed99194076236f1ffba3d302f6"} Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.575378 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.592690 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" 
podStartSLOduration=2.865258258 podStartE2EDuration="21.592662402s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.175166283 +0000 UTC m=+936.454552725" lastFinishedPulling="2026-02-17 13:52:15.902570427 +0000 UTC m=+955.181956869" observedRunningTime="2026-02-17 13:52:16.585985968 +0000 UTC m=+955.865372410" watchObservedRunningTime="2026-02-17 13:52:16.592662402 +0000 UTC m=+955.872048834" Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.610383 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" podStartSLOduration=2.954126168 podStartE2EDuration="21.61036695s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.220247086 +0000 UTC m=+936.499633528" lastFinishedPulling="2026-02-17 13:52:15.876487868 +0000 UTC m=+955.155874310" observedRunningTime="2026-02-17 13:52:16.602402521 +0000 UTC m=+955.881788963" watchObservedRunningTime="2026-02-17 13:52:16.61036695 +0000 UTC m=+955.889753402" Feb 17 13:52:16 crc kubenswrapper[4768]: I0217 13:52:16.623641 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" podStartSLOduration=2.91902077 podStartE2EDuration="21.623631045s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.173042404 +0000 UTC m=+936.452428846" lastFinishedPulling="2026-02-17 13:52:15.877652679 +0000 UTC m=+955.157039121" observedRunningTime="2026-02-17 13:52:16.619615615 +0000 UTC m=+955.899002047" watchObservedRunningTime="2026-02-17 13:52:16.623631045 +0000 UTC m=+955.903017487" Feb 17 13:52:21 crc kubenswrapper[4768]: I0217 13:52:21.612035 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" 
event={"ID":"d8d8a911-905e-45e3-a4ed-35338f74806f","Type":"ContainerStarted","Data":"f49e59e877b3542468cf1feaf3cc8975037ae859e5bbecf94df63dbeac8585f4"} Feb 17 13:52:21 crc kubenswrapper[4768]: I0217 13:52:21.613313 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" Feb 17 13:52:21 crc kubenswrapper[4768]: I0217 13:52:21.614751 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" event={"ID":"a5348349-c195-4af1-b367-a6cb0842305b","Type":"ContainerStarted","Data":"a48bdeef62f1a67f9d6c0f6c66d3c13e888dc1d1b128ee6ba88859dbbb39d1e1"} Feb 17 13:52:21 crc kubenswrapper[4768]: I0217 13:52:21.629625 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" podStartSLOduration=3.40978945 podStartE2EDuration="26.629601477s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.218538858 +0000 UTC m=+936.497925300" lastFinishedPulling="2026-02-17 13:52:20.438350885 +0000 UTC m=+959.717737327" observedRunningTime="2026-02-17 13:52:21.625597037 +0000 UTC m=+960.904983479" watchObservedRunningTime="2026-02-17 13:52:21.629601477 +0000 UTC m=+960.908987919" Feb 17 13:52:21 crc kubenswrapper[4768]: I0217 13:52:21.643962 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pqvzt" podStartSLOduration=3.174036611 podStartE2EDuration="26.643929802s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.186759893 +0000 UTC m=+936.466146335" lastFinishedPulling="2026-02-17 13:52:20.656653084 +0000 UTC m=+959.936039526" observedRunningTime="2026-02-17 13:52:21.642706779 +0000 UTC m=+960.922093221" watchObservedRunningTime="2026-02-17 13:52:21.643929802 
+0000 UTC m=+960.923316244" Feb 17 13:52:22 crc kubenswrapper[4768]: I0217 13:52:22.621932 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" event={"ID":"9aee7c4a-404a-434e-8aa9-b671553532d2","Type":"ContainerStarted","Data":"78bb7bf8dd025b18b3f3293f45aa883172cbd3fdebdb664734ec55354fda519e"} Feb 17 13:52:22 crc kubenswrapper[4768]: I0217 13:52:22.622430 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" Feb 17 13:52:22 crc kubenswrapper[4768]: I0217 13:52:22.639701 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" podStartSLOduration=2.342198708 podStartE2EDuration="27.639681905s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.049770776 +0000 UTC m=+936.329157218" lastFinishedPulling="2026-02-17 13:52:22.347253973 +0000 UTC m=+961.626640415" observedRunningTime="2026-02-17 13:52:22.635015076 +0000 UTC m=+961.914401518" watchObservedRunningTime="2026-02-17 13:52:22.639681905 +0000 UTC m=+961.919068347" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.460812 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hrkzn" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.490849 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hn2hg" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.496849 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7wnck" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.600544 4768 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-bl6rp" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.645919 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" event={"ID":"e48b6c11-496b-4f36-9155-119bbfb506f8","Type":"ContainerStarted","Data":"0a9fda4063f54a753134a6044405efa7e5a25f70c897654d3e4eaf5c6a75313e"} Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.646335 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.661549 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hrb5z" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.667747 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" podStartSLOduration=2.861347591 podStartE2EDuration="30.667729277s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.200628675 +0000 UTC m=+936.480015117" lastFinishedPulling="2026-02-17 13:52:25.007010371 +0000 UTC m=+964.286396803" observedRunningTime="2026-02-17 13:52:25.661993779 +0000 UTC m=+964.941380221" watchObservedRunningTime="2026-02-17 13:52:25.667729277 +0000 UTC m=+964.947115719" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.698754 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-pfr2g" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.753608 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-99tll" Feb 17 13:52:25 crc 
kubenswrapper[4768]: I0217 13:52:25.756010 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qfz4j" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.882654 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-v5svk" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.893972 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-j4n52" Feb 17 13:52:25 crc kubenswrapper[4768]: I0217 13:52:25.903723 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4wm78" Feb 17 13:52:26 crc kubenswrapper[4768]: I0217 13:52:26.144800 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-kmgl4" Feb 17 13:52:26 crc kubenswrapper[4768]: I0217 13:52:26.178544 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4c9sb" Feb 17 13:52:26 crc kubenswrapper[4768]: I0217 13:52:26.191856 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-ln8v5" Feb 17 13:52:26 crc kubenswrapper[4768]: I0217 13:52:26.236058 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xx7vm" Feb 17 13:52:26 crc kubenswrapper[4768]: I0217 13:52:26.656008 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" 
event={"ID":"4f912c2e-e494-46c0-9231-40c106b00c40","Type":"ContainerStarted","Data":"0eb04b3b2631ea80bd4e9fe39548e95a7cf0b4a6244fc148f424db64dda91613"} Feb 17 13:52:26 crc kubenswrapper[4768]: I0217 13:52:26.657212 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" Feb 17 13:52:26 crc kubenswrapper[4768]: I0217 13:52:26.671446 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" podStartSLOduration=2.67013987 podStartE2EDuration="31.671427748s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:51:57.067638519 +0000 UTC m=+936.347024961" lastFinishedPulling="2026-02-17 13:52:26.068926397 +0000 UTC m=+965.348312839" observedRunningTime="2026-02-17 13:52:26.670635406 +0000 UTC m=+965.950021848" watchObservedRunningTime="2026-02-17 13:52:26.671427748 +0000 UTC m=+965.950814190" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.236286 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.244384 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e699840-e748-4e5d-8629-f0379a7cce08-cert\") pod \"infra-operator-controller-manager-79d975b745-cpkx6\" (UID: \"9e699840-e748-4e5d-8629-f0379a7cce08\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.438455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.442899 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09c0d3ef-49e2-4dec-a95f-951be73d5740-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7\" (UID: \"09c0d3ef-49e2-4dec-a95f-951be73d5740\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.472570 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vgzrk" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.481076 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.583157 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nkdk2" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.590355 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.834402 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7"] Feb 17 13:52:27 crc kubenswrapper[4768]: W0217 13:52:27.838280 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09c0d3ef_49e2_4dec_a95f_951be73d5740.slice/crio-b0148126c153f0a0e056793d971d060bf6127048181b3da9d71adeb3c145aa84 WatchSource:0}: Error finding container b0148126c153f0a0e056793d971d060bf6127048181b3da9d71adeb3c145aa84: Status 404 returned error can't find the container with id b0148126c153f0a0e056793d971d060bf6127048181b3da9d71adeb3c145aa84 Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.843308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.850457 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7b1071b-c742-4578-8226-12a6cce613f1-webhook-certs\") pod \"openstack-operator-controller-manager-57785b79bf-sjndd\" (UID: \"e7b1071b-c742-4578-8226-12a6cce613f1\") " pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:27 crc kubenswrapper[4768]: I0217 13:52:27.890535 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6"] Feb 17 13:52:27 crc kubenswrapper[4768]: W0217 
13:52:27.892910 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e699840_e748_4e5d_8629_f0379a7cce08.slice/crio-439d22b34604cd24f1c169b25a338476452a0ea98ad95ac35035a395a8a85ce7 WatchSource:0}: Error finding container 439d22b34604cd24f1c169b25a338476452a0ea98ad95ac35035a395a8a85ce7: Status 404 returned error can't find the container with id 439d22b34604cd24f1c169b25a338476452a0ea98ad95ac35035a395a8a85ce7 Feb 17 13:52:28 crc kubenswrapper[4768]: I0217 13:52:28.102900 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hkfbg" Feb 17 13:52:28 crc kubenswrapper[4768]: I0217 13:52:28.111466 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:28 crc kubenswrapper[4768]: I0217 13:52:28.626668 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd"] Feb 17 13:52:28 crc kubenswrapper[4768]: W0217 13:52:28.642324 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7b1071b_c742_4578_8226_12a6cce613f1.slice/crio-0ce9479399a059f00e386d8722b092d76c031ceafa60bbeecf5a5770131f76f2 WatchSource:0}: Error finding container 0ce9479399a059f00e386d8722b092d76c031ceafa60bbeecf5a5770131f76f2: Status 404 returned error can't find the container with id 0ce9479399a059f00e386d8722b092d76c031ceafa60bbeecf5a5770131f76f2 Feb 17 13:52:28 crc kubenswrapper[4768]: I0217 13:52:28.670255 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" 
event={"ID":"9e699840-e748-4e5d-8629-f0379a7cce08","Type":"ContainerStarted","Data":"439d22b34604cd24f1c169b25a338476452a0ea98ad95ac35035a395a8a85ce7"} Feb 17 13:52:28 crc kubenswrapper[4768]: I0217 13:52:28.671653 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" event={"ID":"e7b1071b-c742-4578-8226-12a6cce613f1","Type":"ContainerStarted","Data":"0ce9479399a059f00e386d8722b092d76c031ceafa60bbeecf5a5770131f76f2"} Feb 17 13:52:28 crc kubenswrapper[4768]: I0217 13:52:28.672322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" event={"ID":"09c0d3ef-49e2-4dec-a95f-951be73d5740","Type":"ContainerStarted","Data":"b0148126c153f0a0e056793d971d060bf6127048181b3da9d71adeb3c145aa84"} Feb 17 13:52:35 crc kubenswrapper[4768]: I0217 13:52:35.731092 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7ntmb" Feb 17 13:52:35 crc kubenswrapper[4768]: I0217 13:52:35.732014 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" event={"ID":"e7b1071b-c742-4578-8226-12a6cce613f1","Type":"ContainerStarted","Data":"44eaa75426a82078611989b3107eaa86ac5909e081ddc9d9e14438291cfb53df"} Feb 17 13:52:35 crc kubenswrapper[4768]: I0217 13:52:35.732164 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:52:35 crc kubenswrapper[4768]: I0217 13:52:35.794664 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" podStartSLOduration=40.794644583 podStartE2EDuration="40.794644583s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:52:35.790513769 +0000 UTC m=+975.069900231" watchObservedRunningTime="2026-02-17 13:52:35.794644583 +0000 UTC m=+975.074031025" Feb 17 13:52:35 crc kubenswrapper[4768]: I0217 13:52:35.803155 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pvjkl" Feb 17 13:52:35 crc kubenswrapper[4768]: I0217 13:52:35.933246 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-9krmz" Feb 17 13:52:39 crc kubenswrapper[4768]: I0217 13:52:39.760458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" event={"ID":"9e699840-e748-4e5d-8629-f0379a7cce08","Type":"ContainerStarted","Data":"0d4103d6d31c3c709ff6ea3f3d585683d61545b19e693a76fdaed856a54f5d26"} Feb 17 13:52:39 crc kubenswrapper[4768]: I0217 13:52:39.761091 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:52:39 crc kubenswrapper[4768]: I0217 13:52:39.762802 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" event={"ID":"09c0d3ef-49e2-4dec-a95f-951be73d5740","Type":"ContainerStarted","Data":"bbc5fb167afd8668a20e64e74e001ffdc30865e5ed6503cf9a5f59773ba4881e"} Feb 17 13:52:39 crc kubenswrapper[4768]: I0217 13:52:39.762977 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:52:39 crc kubenswrapper[4768]: I0217 13:52:39.782369 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" podStartSLOduration=33.241475092 podStartE2EDuration="44.78235206s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:52:27.895474194 +0000 UTC m=+967.174860636" lastFinishedPulling="2026-02-17 13:52:39.436351162 +0000 UTC m=+978.715737604" observedRunningTime="2026-02-17 13:52:39.780000155 +0000 UTC m=+979.059386617" watchObservedRunningTime="2026-02-17 13:52:39.78235206 +0000 UTC m=+979.061738512" Feb 17 13:52:39 crc kubenswrapper[4768]: I0217 13:52:39.812383 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" podStartSLOduration=33.22227038 podStartE2EDuration="44.812362875s" podCreationTimestamp="2026-02-17 13:51:55 +0000 UTC" firstStartedPulling="2026-02-17 13:52:27.840583121 +0000 UTC m=+967.119969563" lastFinishedPulling="2026-02-17 13:52:39.430675616 +0000 UTC m=+978.710062058" observedRunningTime="2026-02-17 13:52:39.808715525 +0000 UTC m=+979.088101967" watchObservedRunningTime="2026-02-17 13:52:39.812362875 +0000 UTC m=+979.091749317" Feb 17 13:52:47 crc kubenswrapper[4768]: I0217 13:52:47.494218 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cpkx6" Feb 17 13:52:47 crc kubenswrapper[4768]: I0217 13:52:47.597636 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7" Feb 17 13:52:48 crc kubenswrapper[4768]: I0217 13:52:48.117464 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57785b79bf-sjndd" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.477868 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-r8xt7"] Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.479369 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.486675 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.487606 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-cjvh5" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.487651 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.489559 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.496834 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r8xt7"] Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.575181 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ea37da-9fba-4e4d-8130-e5cfae709011-config\") pod \"dnsmasq-dns-675f4bcbfc-r8xt7\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.575294 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjgc\" (UniqueName: \"kubernetes.io/projected/33ea37da-9fba-4e4d-8130-e5cfae709011-kube-api-access-pmjgc\") pod \"dnsmasq-dns-675f4bcbfc-r8xt7\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.597565 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-68xft"] Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.598798 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.610439 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.625726 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-68xft"] Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.679779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.679842 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg2xr\" (UniqueName: \"kubernetes.io/projected/3e57419a-285d-4114-bf10-9b204239483f-kube-api-access-cg2xr\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.679881 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjgc\" (UniqueName: \"kubernetes.io/projected/33ea37da-9fba-4e4d-8130-e5cfae709011-kube-api-access-pmjgc\") pod \"dnsmasq-dns-675f4bcbfc-r8xt7\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.680077 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/33ea37da-9fba-4e4d-8130-e5cfae709011-config\") pod \"dnsmasq-dns-675f4bcbfc-r8xt7\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.680261 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-config\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.681237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ea37da-9fba-4e4d-8130-e5cfae709011-config\") pod \"dnsmasq-dns-675f4bcbfc-r8xt7\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.706576 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjgc\" (UniqueName: \"kubernetes.io/projected/33ea37da-9fba-4e4d-8130-e5cfae709011-kube-api-access-pmjgc\") pod \"dnsmasq-dns-675f4bcbfc-r8xt7\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.781035 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-config\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.781121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-dns-svc\") pod 
\"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.781154 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg2xr\" (UniqueName: \"kubernetes.io/projected/3e57419a-285d-4114-bf10-9b204239483f-kube-api-access-cg2xr\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.782045 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-config\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.782076 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.794868 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.802164 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg2xr\" (UniqueName: \"kubernetes.io/projected/3e57419a-285d-4114-bf10-9b204239483f-kube-api-access-cg2xr\") pod \"dnsmasq-dns-78dd6ddcc-68xft\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:03 crc kubenswrapper[4768]: I0217 13:53:03.935871 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:04 crc kubenswrapper[4768]: I0217 13:53:04.178508 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-68xft"] Feb 17 13:53:04 crc kubenswrapper[4768]: I0217 13:53:04.189672 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 13:53:04 crc kubenswrapper[4768]: I0217 13:53:04.243340 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r8xt7"] Feb 17 13:53:04 crc kubenswrapper[4768]: I0217 13:53:04.946787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" event={"ID":"3e57419a-285d-4114-bf10-9b204239483f","Type":"ContainerStarted","Data":"0dea43910180b73248188c7b1a2f7e87a189e3e4337ca2dc42b7406881da9a0e"} Feb 17 13:53:04 crc kubenswrapper[4768]: I0217 13:53:04.948199 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" event={"ID":"33ea37da-9fba-4e4d-8130-e5cfae709011","Type":"ContainerStarted","Data":"30eda3f65cfaa44640563d9a9c38dc32563e44a344f48aa2d3d04e1905fa91af"} Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.162612 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r8xt7"] Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.197732 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99fvk"] Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.198928 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.209469 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99fvk"] Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.224622 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-dns-svc\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.224669 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-config\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.224840 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzlfz\" (UniqueName: \"kubernetes.io/projected/738597ef-0d1a-40e5-b592-82f54af22e13-kube-api-access-dzlfz\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.327150 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzlfz\" (UniqueName: \"kubernetes.io/projected/738597ef-0d1a-40e5-b592-82f54af22e13-kube-api-access-dzlfz\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.327269 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-dns-svc\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.327316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-config\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.330226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-dns-svc\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.330823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-config\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.367857 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzlfz\" (UniqueName: \"kubernetes.io/projected/738597ef-0d1a-40e5-b592-82f54af22e13-kube-api-access-dzlfz\") pod \"dnsmasq-dns-666b6646f7-99fvk\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.491708 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-68xft"] Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.509078 4768 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftfmm"] Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.510509 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.532409 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.533188 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ngsx\" (UniqueName: \"kubernetes.io/projected/efe2337b-7579-4cc3-9de6-4076d51d3fdf-kube-api-access-4ngsx\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.534020 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.534123 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-config\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.535505 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftfmm"] Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.639763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-config\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.639859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ngsx\" (UniqueName: \"kubernetes.io/projected/efe2337b-7579-4cc3-9de6-4076d51d3fdf-kube-api-access-4ngsx\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.639904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.640664 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.642331 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-config\") pod \"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.675715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ngsx\" (UniqueName: \"kubernetes.io/projected/efe2337b-7579-4cc3-9de6-4076d51d3fdf-kube-api-access-4ngsx\") pod 
\"dnsmasq-dns-57d769cc4f-ftfmm\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:06 crc kubenswrapper[4768]: I0217 13:53:06.866536 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.131697 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99fvk"] Feb 17 13:53:07 crc kubenswrapper[4768]: W0217 13:53:07.137276 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod738597ef_0d1a_40e5_b592_82f54af22e13.slice/crio-93ab68618e5720085a0390cacd11c01ed74fbd0f7b457be228c6d51c1bbe5a76 WatchSource:0}: Error finding container 93ab68618e5720085a0390cacd11c01ed74fbd0f7b457be228c6d51c1bbe5a76: Status 404 returned error can't find the container with id 93ab68618e5720085a0390cacd11c01ed74fbd0f7b457be228c6d51c1bbe5a76 Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.298338 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftfmm"] Feb 17 13:53:07 crc kubenswrapper[4768]: W0217 13:53:07.306300 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefe2337b_7579_4cc3_9de6_4076d51d3fdf.slice/crio-ae4bdc67098bd33126dc0b1ece45fe5f713557d06f572c403cdaa5eaa103d742 WatchSource:0}: Error finding container ae4bdc67098bd33126dc0b1ece45fe5f713557d06f572c403cdaa5eaa103d742: Status 404 returned error can't find the container with id ae4bdc67098bd33126dc0b1ece45fe5f713557d06f572c403cdaa5eaa103d742 Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.372223 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.373486 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.378251 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.378828 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.379052 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.379265 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.382063 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.382698 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.383274 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bm4g4" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.385222 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.452714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9615d9e4-113e-4282-a091-a8c69a0c7968-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.452757 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.452795 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.452855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-config-data\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.453230 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.453264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcbv\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-kube-api-access-vbcbv\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.453304 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.453325 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.453350 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.453503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.453600 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9615d9e4-113e-4282-a091-a8c69a0c7968-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.554563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/9615d9e4-113e-4282-a091-a8c69a0c7968-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.554942 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9615d9e4-113e-4282-a091-a8c69a0c7968-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.554975 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555012 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-config-data\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " 
pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555089 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbcbv\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-kube-api-access-vbcbv\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555146 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555169 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555192 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.555262 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.556250 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.557621 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.557868 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.557957 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.558870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.561217 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9615d9e4-113e-4282-a091-a8c69a0c7968-erlang-cookie-secret\") pod \"rabbitmq-server-0\" 
(UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.564215 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-config-data\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.567507 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.572875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbcbv\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-kube-api-access-vbcbv\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.573891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9615d9e4-113e-4282-a091-a8c69a0c7968-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.593412 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.595632 4768 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.679008 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.680473 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.688708 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.688897 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r6btj" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.689265 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.689465 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.689645 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.689934 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.690194 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.698944 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.720611 4768 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.863829 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d5df4be-f003-429d-8a84-81a239db88c0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.863894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nk2\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-kube-api-access-t6nk2\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.863924 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.863985 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.864014 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d5df4be-f003-429d-8a84-81a239db88c0-erlang-cookie-secret\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.864037 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.864075 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.864142 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.864168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.864341 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.864418 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965636 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965694 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965734 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d5df4be-f003-429d-8a84-81a239db88c0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" 
Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965840 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965869 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d5df4be-f003-429d-8a84-81a239db88c0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965889 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t6nk2\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-kube-api-access-t6nk2\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.965918 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.966132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.966984 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.967503 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.967583 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.967617 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.969086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d5df4be-f003-429d-8a84-81a239db88c0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.970820 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d5df4be-f003-429d-8a84-81a239db88c0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.971457 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.971852 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.988026 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.995423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" event={"ID":"738597ef-0d1a-40e5-b592-82f54af22e13","Type":"ContainerStarted","Data":"93ab68618e5720085a0390cacd11c01ed74fbd0f7b457be228c6d51c1bbe5a76"} Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.998292 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6nk2\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-kube-api-access-t6nk2\") pod \"rabbitmq-cell1-server-0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:07 crc kubenswrapper[4768]: I0217 13:53:07.999420 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.003039 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" event={"ID":"efe2337b-7579-4cc3-9de6-4076d51d3fdf","Type":"ContainerStarted","Data":"ae4bdc67098bd33126dc0b1ece45fe5f713557d06f572c403cdaa5eaa103d742"} Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.864485 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.866446 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.870044 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.870515 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.870565 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.870708 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qqm2c" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.876159 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.888416 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892435 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892487 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892531 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892567 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x79pj\" (UniqueName: \"kubernetes.io/projected/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-kube-api-access-x79pj\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892591 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892636 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892694 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.892727 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-kolla-config\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.994773 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-kolla-config\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.995332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-config-data-default\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.995409 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 
13:53:08.995545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.995574 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x79pj\" (UniqueName: \"kubernetes.io/projected/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-kube-api-access-x79pj\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.995608 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.995672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.995778 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.996040 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-kolla-config\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.996881 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.997224 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-config-data-default\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.997282 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:08 crc kubenswrapper[4768]: I0217 13:53:08.998154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:09 crc kubenswrapper[4768]: I0217 13:53:09.003643 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:09 crc kubenswrapper[4768]: I0217 13:53:09.003882 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:09 crc kubenswrapper[4768]: I0217 13:53:09.022031 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:09 crc kubenswrapper[4768]: I0217 13:53:09.026566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x79pj\" (UniqueName: \"kubernetes.io/projected/5ba1ccc6-d556-4867-8e12-a5747dba1ffa-kube-api-access-x79pj\") pod \"openstack-galera-0\" (UID: \"5ba1ccc6-d556-4867-8e12-a5747dba1ffa\") " pod="openstack/openstack-galera-0" Feb 17 13:53:09 crc kubenswrapper[4768]: I0217 13:53:09.184557 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.240237 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.241514 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.242992 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.244849 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-d9zbz" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.245725 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.245958 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.246126 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.314670 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.314860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.314925 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.314971 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.315037 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.315117 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.315153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.315285 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9ksb2\" (UniqueName: \"kubernetes.io/projected/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-kube-api-access-9ksb2\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.344624 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.349626 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.352819 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rnhw7" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.353132 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.353293 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.390227 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.416676 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.416715 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc 
kubenswrapper[4768]: I0217 13:53:10.416766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ksb2\" (UniqueName: \"kubernetes.io/projected/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-kube-api-access-9ksb2\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.416800 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.416845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.416865 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.416889 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.416916 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.417509 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.417969 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.418383 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.419226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.422821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.426832 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.443727 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.450153 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ksb2\" (UniqueName: \"kubernetes.io/projected/a0368ca4-d5b7-4604-b15a-a7cb4fcf5652-kube-api-access-9ksb2\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.455313 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652\") " pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.517966 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.518058 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-kolla-config\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.518114 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-config-data\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.518138 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kmrw\" (UniqueName: \"kubernetes.io/projected/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-kube-api-access-2kmrw\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.518164 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.570439 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.619931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-config-data\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.619989 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kmrw\" (UniqueName: \"kubernetes.io/projected/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-kube-api-access-2kmrw\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.620029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.620097 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.620163 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-kolla-config\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.621059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-kolla-config\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.621061 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-config-data\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.624148 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.624517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.635623 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kmrw\" (UniqueName: \"kubernetes.io/projected/d87a0ca2-9789-4e14-a18b-2ed216ea5d15-kube-api-access-2kmrw\") pod \"memcached-0\" (UID: \"d87a0ca2-9789-4e14-a18b-2ed216ea5d15\") " pod="openstack/memcached-0" Feb 17 13:53:10 crc kubenswrapper[4768]: I0217 13:53:10.683616 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 17 13:53:12 crc kubenswrapper[4768]: I0217 13:53:12.825960 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:53:12 crc kubenswrapper[4768]: I0217 13:53:12.827154 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 13:53:12 crc kubenswrapper[4768]: I0217 13:53:12.832616 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-s4mcr" Feb 17 13:53:12 crc kubenswrapper[4768]: I0217 13:53:12.839175 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:53:12 crc kubenswrapper[4768]: I0217 13:53:12.952095 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z69w2\" (UniqueName: \"kubernetes.io/projected/af700d67-c9a3-4577-b872-6ffd620ce9b5-kube-api-access-z69w2\") pod \"kube-state-metrics-0\" (UID: \"af700d67-c9a3-4577-b872-6ffd620ce9b5\") " pod="openstack/kube-state-metrics-0" Feb 17 13:53:13 crc kubenswrapper[4768]: I0217 13:53:13.053742 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z69w2\" (UniqueName: \"kubernetes.io/projected/af700d67-c9a3-4577-b872-6ffd620ce9b5-kube-api-access-z69w2\") pod \"kube-state-metrics-0\" (UID: \"af700d67-c9a3-4577-b872-6ffd620ce9b5\") " pod="openstack/kube-state-metrics-0" Feb 17 13:53:13 crc kubenswrapper[4768]: I0217 13:53:13.069952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z69w2\" (UniqueName: \"kubernetes.io/projected/af700d67-c9a3-4577-b872-6ffd620ce9b5-kube-api-access-z69w2\") pod \"kube-state-metrics-0\" (UID: \"af700d67-c9a3-4577-b872-6ffd620ce9b5\") " pod="openstack/kube-state-metrics-0" Feb 17 13:53:13 crc kubenswrapper[4768]: I0217 13:53:13.153013 4768 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.624080 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gnb4g"] Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.625186 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.627149 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.627554 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-c5lmv" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.627718 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.638335 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnb4g"] Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.674324 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rkhhj"] Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.675720 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.688810 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rkhhj"] Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75bf7b04-fd76-440d-b975-abf1c4972c4f-scripts\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790585 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-run\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790617 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-log-ovn\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790635 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-ovn-controller-tls-certs\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790665 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-run-ovn\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790688 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-combined-ca-bundle\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790708 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-run\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790737 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjtl\" (UniqueName: \"kubernetes.io/projected/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-kube-api-access-pfjtl\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790757 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-log\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790788 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-lib\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790817 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-etc-ovs\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790833 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5j5\" (UniqueName: \"kubernetes.io/projected/75bf7b04-fd76-440d-b975-abf1c4972c4f-kube-api-access-9b5j5\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.790852 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-scripts\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-log\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-lib\") pod 
\"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892353 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-etc-ovs\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892374 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5j5\" (UniqueName: \"kubernetes.io/projected/75bf7b04-fd76-440d-b975-abf1c4972c4f-kube-api-access-9b5j5\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-scripts\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75bf7b04-fd76-440d-b975-abf1c4972c4f-scripts\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-run\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc 
kubenswrapper[4768]: I0217 13:53:15.892477 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-log-ovn\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892498 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-ovn-controller-tls-certs\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-run-ovn\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-combined-ca-bundle\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892590 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-run\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pfjtl\" (UniqueName: \"kubernetes.io/projected/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-kube-api-access-pfjtl\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892874 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-log\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.892927 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-etc-ovs\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.893121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-run\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.893092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/75bf7b04-fd76-440d-b975-abf1c4972c4f-var-lib\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.893233 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-log-ovn\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") 
" pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.893268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-run\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.893525 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-var-run-ovn\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.895071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75bf7b04-fd76-440d-b975-abf1c4972c4f-scripts\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.895269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-scripts\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.897821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-ovn-controller-tls-certs\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.908480 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-combined-ca-bundle\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.917752 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5j5\" (UniqueName: \"kubernetes.io/projected/75bf7b04-fd76-440d-b975-abf1c4972c4f-kube-api-access-9b5j5\") pod \"ovn-controller-ovs-rkhhj\" (UID: \"75bf7b04-fd76-440d-b975-abf1c4972c4f\") " pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.918179 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfjtl\" (UniqueName: \"kubernetes.io/projected/39dede0b-4ddc-46ea-81c1-a8e7e576aa78-kube-api-access-pfjtl\") pod \"ovn-controller-gnb4g\" (UID: \"39dede0b-4ddc-46ea-81c1-a8e7e576aa78\") " pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.949094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:15 crc kubenswrapper[4768]: I0217 13:53:15.991367 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.495776 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.497426 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.499681 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.500393 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.500566 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.501374 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.501533 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dx6vs" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.513997 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.602645 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1296e827-af28-4d2e-a80d-33add3697b6e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.602749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.602791 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66nr2\" 
(UniqueName: \"kubernetes.io/projected/1296e827-af28-4d2e-a80d-33add3697b6e-kube-api-access-66nr2\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.602887 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.602941 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.603047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1296e827-af28-4d2e-a80d-33add3697b6e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.603156 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1296e827-af28-4d2e-a80d-33add3697b6e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.603198 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704589 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704698 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1296e827-af28-4d2e-a80d-33add3697b6e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704726 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1296e827-af28-4d2e-a80d-33add3697b6e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " 
pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704760 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1296e827-af28-4d2e-a80d-33add3697b6e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704791 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.704804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66nr2\" (UniqueName: \"kubernetes.io/projected/1296e827-af28-4d2e-a80d-33add3697b6e-kube-api-access-66nr2\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.705798 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.706189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1296e827-af28-4d2e-a80d-33add3697b6e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.706452 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1296e827-af28-4d2e-a80d-33add3697b6e-config\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.707677 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1296e827-af28-4d2e-a80d-33add3697b6e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.709866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.713287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.715791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1296e827-af28-4d2e-a80d-33add3697b6e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.729096 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66nr2\" (UniqueName: \"kubernetes.io/projected/1296e827-af28-4d2e-a80d-33add3697b6e-kube-api-access-66nr2\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " 
pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.736045 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1296e827-af28-4d2e-a80d-33add3697b6e\") " pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:16 crc kubenswrapper[4768]: I0217 13:53:16.834270 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:19 crc kubenswrapper[4768]: I0217 13:53:19.952633 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 13:53:19 crc kubenswrapper[4768]: I0217 13:53:19.954613 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:19 crc kubenswrapper[4768]: I0217 13:53:19.960760 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 13:53:19 crc kubenswrapper[4768]: I0217 13:53:19.961058 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 13:53:19 crc kubenswrapper[4768]: I0217 13:53:19.961271 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8qs5n" Feb 17 13:53:19 crc kubenswrapper[4768]: I0217 13:53:19.961396 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 13:53:19 crc kubenswrapper[4768]: I0217 13:53:19.961729 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063183 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e5947dc-7f07-4498-8be8-2b0c184c5853-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063300 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8vcm\" (UniqueName: \"kubernetes.io/projected/6e5947dc-7f07-4498-8be8-2b0c184c5853-kube-api-access-j8vcm\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063349 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063392 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063410 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e5947dc-7f07-4498-8be8-2b0c184c5853-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063441 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063517 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e5947dc-7f07-4498-8be8-2b0c184c5853-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.063623 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165277 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165322 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e5947dc-7f07-4498-8be8-2b0c184c5853-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165426 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e5947dc-7f07-4498-8be8-2b0c184c5853-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165457 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8vcm\" (UniqueName: \"kubernetes.io/projected/6e5947dc-7f07-4498-8be8-2b0c184c5853-kube-api-access-j8vcm\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165514 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e5947dc-7f07-4498-8be8-2b0c184c5853-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.165527 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.166625 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e5947dc-7f07-4498-8be8-2b0c184c5853-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.166838 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.167304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e5947dc-7f07-4498-8be8-2b0c184c5853-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.167756 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e5947dc-7f07-4498-8be8-2b0c184c5853-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.174195 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.174228 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.175280 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e5947dc-7f07-4498-8be8-2b0c184c5853-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.186570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8vcm\" (UniqueName: \"kubernetes.io/projected/6e5947dc-7f07-4498-8be8-2b0c184c5853-kube-api-access-j8vcm\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.193438 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6e5947dc-7f07-4498-8be8-2b0c184c5853\") " pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:20 crc kubenswrapper[4768]: I0217 13:53:20.281768 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:22 crc kubenswrapper[4768]: I0217 13:53:22.149395 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 13:53:22 crc kubenswrapper[4768]: E0217 13:53:22.488567 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 13:53:22 crc kubenswrapper[4768]: E0217 13:53:22.488753 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmjgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-r8xt7_openstack(33ea37da-9fba-4e4d-8130-e5cfae709011): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:53:22 crc kubenswrapper[4768]: E0217 13:53:22.490084 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" podUID="33ea37da-9fba-4e4d-8130-e5cfae709011" Feb 17 13:53:22 crc kubenswrapper[4768]: E0217 13:53:22.496244 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 13:53:22 crc kubenswrapper[4768]: E0217 13:53:22.496403 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cg2xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-68xft_openstack(3e57419a-285d-4114-bf10-9b204239483f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:53:22 crc kubenswrapper[4768]: E0217 13:53:22.498084 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" podUID="3e57419a-285d-4114-bf10-9b204239483f" Feb 17 13:53:22 crc kubenswrapper[4768]: I0217 13:53:22.919012 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:53:23 crc kubenswrapper[4768]: W0217 13:53:23.000407 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9615d9e4_113e_4282_a091_a8c69a0c7968.slice/crio-0b19f20b6877db679d6f3b9fbe72c05ab41581e600a66a3c7b76c954adf7b1c8 WatchSource:0}: Error finding container 0b19f20b6877db679d6f3b9fbe72c05ab41581e600a66a3c7b76c954adf7b1c8: Status 404 returned error can't find the container with id 0b19f20b6877db679d6f3b9fbe72c05ab41581e600a66a3c7b76c954adf7b1c8 Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.116901 4768 generic.go:334] "Generic (PLEG): container finished" podID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" 
containerID="f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c" exitCode=0 Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.117005 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" event={"ID":"efe2337b-7579-4cc3-9de6-4076d51d3fdf","Type":"ContainerDied","Data":"f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c"} Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.133482 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9615d9e4-113e-4282-a091-a8c69a0c7968","Type":"ContainerStarted","Data":"0b19f20b6877db679d6f3b9fbe72c05ab41581e600a66a3c7b76c954adf7b1c8"} Feb 17 13:53:23 crc kubenswrapper[4768]: W0217 13:53:23.133561 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd87a0ca2_9789_4e14_a18b_2ed216ea5d15.slice/crio-044faf4f89dca12f193b777fdfa21d2b1b50d326980f30b3657f83f9a47fa05d WatchSource:0}: Error finding container 044faf4f89dca12f193b777fdfa21d2b1b50d326980f30b3657f83f9a47fa05d: Status 404 returned error can't find the container with id 044faf4f89dca12f193b777fdfa21d2b1b50d326980f30b3657f83f9a47fa05d Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.136972 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d5df4be-f003-429d-8a84-81a239db88c0","Type":"ContainerStarted","Data":"025e73479229fbdeae9d1d20002858466f4b18906e6618e10f641a478641e6a0"} Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.143586 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.153423 4768 generic.go:334] "Generic (PLEG): container finished" podID="738597ef-0d1a-40e5-b592-82f54af22e13" containerID="6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa" exitCode=0 Feb 17 13:53:23 crc 
kubenswrapper[4768]: I0217 13:53:23.154778 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" event={"ID":"738597ef-0d1a-40e5-b592-82f54af22e13","Type":"ContainerDied","Data":"6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa"} Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.161178 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.166569 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 13:53:23 crc kubenswrapper[4768]: W0217 13:53:23.178657 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0368ca4_d5b7_4604_b15a_a7cb4fcf5652.slice/crio-81d76961b218f3b97f8f5e897916c93d3dcf54a0263dce5fd1acddfa9d4c89e3 WatchSource:0}: Error finding container 81d76961b218f3b97f8f5e897916c93d3dcf54a0263dce5fd1acddfa9d4c89e3: Status 404 returned error can't find the container with id 81d76961b218f3b97f8f5e897916c93d3dcf54a0263dce5fd1acddfa9d4c89e3 Feb 17 13:53:23 crc kubenswrapper[4768]: W0217 13:53:23.180293 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ba1ccc6_d556_4867_8e12_a5747dba1ffa.slice/crio-56b986cffcc5f3cb29e3cf1df97cb7b3de5849db65a418e7a8bec024b3ed4a0c WatchSource:0}: Error finding container 56b986cffcc5f3cb29e3cf1df97cb7b3de5849db65a418e7a8bec024b3ed4a0c: Status 404 returned error can't find the container with id 56b986cffcc5f3cb29e3cf1df97cb7b3de5849db65a418e7a8bec024b3ed4a0c Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.290019 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnb4g"] Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.295884 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] 
Feb 17 13:53:23 crc kubenswrapper[4768]: W0217 13:53:23.322163 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39dede0b_4ddc_46ea_81c1_a8e7e576aa78.slice/crio-887ca7a898921b1254d4b9af01fff7c2811cbd5b1aa39dada98b4d239edd34ad WatchSource:0}: Error finding container 887ca7a898921b1254d4b9af01fff7c2811cbd5b1aa39dada98b4d239edd34ad: Status 404 returned error can't find the container with id 887ca7a898921b1254d4b9af01fff7c2811cbd5b1aa39dada98b4d239edd34ad Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.329671 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rkhhj"] Feb 17 13:53:23 crc kubenswrapper[4768]: W0217 13:53:23.329960 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75bf7b04_fd76_440d_b975_abf1c4972c4f.slice/crio-0dc6481c8086413c1b6c8f65087b446b10a3fc05bfed067821239cf0279892b7 WatchSource:0}: Error finding container 0dc6481c8086413c1b6c8f65087b446b10a3fc05bfed067821239cf0279892b7: Status 404 returned error can't find the container with id 0dc6481c8086413c1b6c8f65087b446b10a3fc05bfed067821239cf0279892b7 Feb 17 13:53:23 crc kubenswrapper[4768]: E0217 13:53:23.424176 4768 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 17 13:53:23 crc kubenswrapper[4768]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/738597ef-0d1a-40e5-b592-82f54af22e13/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 17 13:53:23 crc kubenswrapper[4768]: > podSandboxID="93ab68618e5720085a0390cacd11c01ed74fbd0f7b457be228c6d51c1bbe5a76" Feb 17 13:53:23 crc kubenswrapper[4768]: E0217 13:53:23.424362 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 17 13:53:23 crc kubenswrapper[4768]: container 
&Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzlfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-99fvk_openstack(738597ef-0d1a-40e5-b592-82f54af22e13): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/738597ef-0d1a-40e5-b592-82f54af22e13/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 17 13:53:23 crc kubenswrapper[4768]: > logger="UnhandledError" Feb 17 13:53:23 crc kubenswrapper[4768]: E0217 13:53:23.425519 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/738597ef-0d1a-40e5-b592-82f54af22e13/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.495160 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 13:53:23 crc kubenswrapper[4768]: W0217 13:53:23.500346 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1296e827_af28_4d2e_a80d_33add3697b6e.slice/crio-e2f08182b6896fa8152b25006064419e75d4a218e41389ef997235534bdbd6cd WatchSource:0}: Error finding container e2f08182b6896fa8152b25006064419e75d4a218e41389ef997235534bdbd6cd: Status 404 returned error can't find the container with id e2f08182b6896fa8152b25006064419e75d4a218e41389ef997235534bdbd6cd Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.505086 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.540090 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.625893 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ea37da-9fba-4e4d-8130-e5cfae709011-config\") pod \"33ea37da-9fba-4e4d-8130-e5cfae709011\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.625980 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-dns-svc\") pod \"3e57419a-285d-4114-bf10-9b204239483f\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.626050 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmjgc\" (UniqueName: \"kubernetes.io/projected/33ea37da-9fba-4e4d-8130-e5cfae709011-kube-api-access-pmjgc\") pod \"33ea37da-9fba-4e4d-8130-e5cfae709011\" (UID: \"33ea37da-9fba-4e4d-8130-e5cfae709011\") " Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.626084 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cg2xr\" (UniqueName: \"kubernetes.io/projected/3e57419a-285d-4114-bf10-9b204239483f-kube-api-access-cg2xr\") pod \"3e57419a-285d-4114-bf10-9b204239483f\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.626268 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-config\") pod \"3e57419a-285d-4114-bf10-9b204239483f\" (UID: \"3e57419a-285d-4114-bf10-9b204239483f\") " Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.626583 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33ea37da-9fba-4e4d-8130-e5cfae709011-config" (OuterVolumeSpecName: "config") pod "33ea37da-9fba-4e4d-8130-e5cfae709011" (UID: "33ea37da-9fba-4e4d-8130-e5cfae709011"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.626596 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e57419a-285d-4114-bf10-9b204239483f" (UID: "3e57419a-285d-4114-bf10-9b204239483f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.626991 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-config" (OuterVolumeSpecName: "config") pod "3e57419a-285d-4114-bf10-9b204239483f" (UID: "3e57419a-285d-4114-bf10-9b204239483f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.630996 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ea37da-9fba-4e4d-8130-e5cfae709011-kube-api-access-pmjgc" (OuterVolumeSpecName: "kube-api-access-pmjgc") pod "33ea37da-9fba-4e4d-8130-e5cfae709011" (UID: "33ea37da-9fba-4e4d-8130-e5cfae709011"). InnerVolumeSpecName "kube-api-access-pmjgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.631073 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e57419a-285d-4114-bf10-9b204239483f-kube-api-access-cg2xr" (OuterVolumeSpecName: "kube-api-access-cg2xr") pod "3e57419a-285d-4114-bf10-9b204239483f" (UID: "3e57419a-285d-4114-bf10-9b204239483f"). InnerVolumeSpecName "kube-api-access-cg2xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.727823 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg2xr\" (UniqueName: \"kubernetes.io/projected/3e57419a-285d-4114-bf10-9b204239483f-kube-api-access-cg2xr\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.727891 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.727905 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33ea37da-9fba-4e4d-8130-e5cfae709011-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.727917 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e57419a-285d-4114-bf10-9b204239483f-dns-svc\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:53:23 crc kubenswrapper[4768]: I0217 13:53:23.727930 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmjgc\" (UniqueName: \"kubernetes.io/projected/33ea37da-9fba-4e4d-8130-e5cfae709011-kube-api-access-pmjgc\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.130969 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.181936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" event={"ID":"efe2337b-7579-4cc3-9de6-4076d51d3fdf","Type":"ContainerStarted","Data":"8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.182785 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.183989 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" event={"ID":"3e57419a-285d-4114-bf10-9b204239483f","Type":"ContainerDied","Data":"0dea43910180b73248188c7b1a2f7e87a189e3e4337ca2dc42b7406881da9a0e"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.184008 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-68xft" Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.190418 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rkhhj" event={"ID":"75bf7b04-fd76-440d-b975-abf1c4972c4f","Type":"ContainerStarted","Data":"0dc6481c8086413c1b6c8f65087b446b10a3fc05bfed067821239cf0279892b7"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.194387 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d87a0ca2-9789-4e14-a18b-2ed216ea5d15","Type":"ContainerStarted","Data":"044faf4f89dca12f193b777fdfa21d2b1b50d326980f30b3657f83f9a47fa05d"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.196786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" event={"ID":"33ea37da-9fba-4e4d-8130-e5cfae709011","Type":"ContainerDied","Data":"30eda3f65cfaa44640563d9a9c38dc32563e44a344f48aa2d3d04e1905fa91af"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.196857 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r8xt7" Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.198830 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1296e827-af28-4d2e-a80d-33add3697b6e","Type":"ContainerStarted","Data":"e2f08182b6896fa8152b25006064419e75d4a218e41389ef997235534bdbd6cd"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.199966 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnb4g" event={"ID":"39dede0b-4ddc-46ea-81c1-a8e7e576aa78","Type":"ContainerStarted","Data":"887ca7a898921b1254d4b9af01fff7c2811cbd5b1aa39dada98b4d239edd34ad"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.204983 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5ba1ccc6-d556-4867-8e12-a5747dba1ffa","Type":"ContainerStarted","Data":"56b986cffcc5f3cb29e3cf1df97cb7b3de5849db65a418e7a8bec024b3ed4a0c"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.207822 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652","Type":"ContainerStarted","Data":"81d76961b218f3b97f8f5e897916c93d3dcf54a0263dce5fd1acddfa9d4c89e3"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.209300 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"af700d67-c9a3-4577-b872-6ffd620ce9b5","Type":"ContainerStarted","Data":"5552a05ba8b326989acffab4a0639d4e4408339023df02d5fcc2376e12d8f6e3"} Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.218176 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" podStartSLOduration=2.933512026 podStartE2EDuration="18.218150179s" podCreationTimestamp="2026-02-17 13:53:06 +0000 UTC" firstStartedPulling="2026-02-17 13:53:07.308730025 +0000 UTC m=+1006.588116467" 
lastFinishedPulling="2026-02-17 13:53:22.593368178 +0000 UTC m=+1021.872754620" observedRunningTime="2026-02-17 13:53:24.203127347 +0000 UTC m=+1023.482513799" watchObservedRunningTime="2026-02-17 13:53:24.218150179 +0000 UTC m=+1023.497536621" Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.280305 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-68xft"] Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.283393 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-68xft"] Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.303996 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r8xt7"] Feb 17 13:53:24 crc kubenswrapper[4768]: I0217 13:53:24.311632 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r8xt7"] Feb 17 13:53:25 crc kubenswrapper[4768]: I0217 13:53:25.545201 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ea37da-9fba-4e4d-8130-e5cfae709011" path="/var/lib/kubelet/pods/33ea37da-9fba-4e4d-8130-e5cfae709011/volumes" Feb 17 13:53:25 crc kubenswrapper[4768]: I0217 13:53:25.545915 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e57419a-285d-4114-bf10-9b204239483f" path="/var/lib/kubelet/pods/3e57419a-285d-4114-bf10-9b204239483f/volumes" Feb 17 13:53:25 crc kubenswrapper[4768]: W0217 13:53:25.962352 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e5947dc_7f07_4498_8be8_2b0c184c5853.slice/crio-370575f62bd9044e80e1a55037ea96107c63cf58b5c494b1d941b08bf15bd85b WatchSource:0}: Error finding container 370575f62bd9044e80e1a55037ea96107c63cf58b5c494b1d941b08bf15bd85b: Status 404 returned error can't find the container with id 370575f62bd9044e80e1a55037ea96107c63cf58b5c494b1d941b08bf15bd85b Feb 17 13:53:26 crc kubenswrapper[4768]: I0217 
13:53:26.222422 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e5947dc-7f07-4498-8be8-2b0c184c5853","Type":"ContainerStarted","Data":"370575f62bd9044e80e1a55037ea96107c63cf58b5c494b1d941b08bf15bd85b"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.257343 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rkhhj" event={"ID":"75bf7b04-fd76-440d-b975-abf1c4972c4f","Type":"ContainerStarted","Data":"6962d495bcbd8bd6fa788b712c905afe801a6451ea4ecc9da9efc9a3a9d2b4da"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.259002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" event={"ID":"738597ef-0d1a-40e5-b592-82f54af22e13","Type":"ContainerStarted","Data":"3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.259422 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.260612 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e5947dc-7f07-4498-8be8-2b0c184c5853","Type":"ContainerStarted","Data":"1e3532431881a2a0bd80d8e55ab131b6f6fc891c1fb9d97256a719d49f6e51cd"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.261893 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1296e827-af28-4d2e-a80d-33add3697b6e","Type":"ContainerStarted","Data":"c0715ecfa9d554eab5ed13654e336ba75bdf06c0d861c3158e58e84536a74c04"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.262971 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5ba1ccc6-d556-4867-8e12-a5747dba1ffa","Type":"ContainerStarted","Data":"b8f6c22c70fb5a2b49b023bfd46379968b3c1a9d2d33df96619c0532dc3500f7"} Feb 17 13:53:31 crc 
kubenswrapper[4768]: I0217 13:53:31.265400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652","Type":"ContainerStarted","Data":"d4a2b1485b5e815515a8b505fd7abec725ba44319afa2850175f7b299680e08b"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.267298 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d87a0ca2-9789-4e14-a18b-2ed216ea5d15","Type":"ContainerStarted","Data":"5d6ac5e0952234e4e4d90714f6320d60fcad874c884594e854e415343ca539ef"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.268046 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.269072 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"af700d67-c9a3-4577-b872-6ffd620ce9b5","Type":"ContainerStarted","Data":"07af6ff52633bd9c46b353cf08a42dbb5ea64d2d3f278bd8dccfb2c912b59bcd"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.269523 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.270616 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnb4g" event={"ID":"39dede0b-4ddc-46ea-81c1-a8e7e576aa78","Type":"ContainerStarted","Data":"a13bcee908043f2289bea4a106e1856e9426fbf236229d115554105fed17c0ad"} Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.270987 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gnb4g" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.315444 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.046423451 podStartE2EDuration="19.315414723s" podCreationTimestamp="2026-02-17 13:53:12 +0000 UTC" 
firstStartedPulling="2026-02-17 13:53:23.330172006 +0000 UTC m=+1022.609558448" lastFinishedPulling="2026-02-17 13:53:30.599163278 +0000 UTC m=+1029.878549720" observedRunningTime="2026-02-17 13:53:31.314440255 +0000 UTC m=+1030.593826697" watchObservedRunningTime="2026-02-17 13:53:31.315414723 +0000 UTC m=+1030.594801225" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.406811 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" podStartSLOduration=9.958676557 podStartE2EDuration="25.406793053s" podCreationTimestamp="2026-02-17 13:53:06 +0000 UTC" firstStartedPulling="2026-02-17 13:53:07.141201641 +0000 UTC m=+1006.420588083" lastFinishedPulling="2026-02-17 13:53:22.589318137 +0000 UTC m=+1021.868704579" observedRunningTime="2026-02-17 13:53:31.398483775 +0000 UTC m=+1030.677870217" watchObservedRunningTime="2026-02-17 13:53:31.406793053 +0000 UTC m=+1030.686179495" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.492561 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.627010857 podStartE2EDuration="21.49253842s" podCreationTimestamp="2026-02-17 13:53:10 +0000 UTC" firstStartedPulling="2026-02-17 13:53:23.136608977 +0000 UTC m=+1022.415995429" lastFinishedPulling="2026-02-17 13:53:29.00213655 +0000 UTC m=+1028.281522992" observedRunningTime="2026-02-17 13:53:31.428850389 +0000 UTC m=+1030.708236831" watchObservedRunningTime="2026-02-17 13:53:31.49253842 +0000 UTC m=+1030.771924862" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.506881 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gnb4g" podStartSLOduration=9.833061638 podStartE2EDuration="16.506857693s" podCreationTimestamp="2026-02-17 13:53:15 +0000 UTC" firstStartedPulling="2026-02-17 13:53:23.330592459 +0000 UTC m=+1022.609978901" lastFinishedPulling="2026-02-17 13:53:30.004388514 +0000 UTC 
m=+1029.283774956" observedRunningTime="2026-02-17 13:53:31.454368911 +0000 UTC m=+1030.733755353" watchObservedRunningTime="2026-02-17 13:53:31.506857693 +0000 UTC m=+1030.786244145" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.868014 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:31 crc kubenswrapper[4768]: I0217 13:53:31.916423 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99fvk"] Feb 17 13:53:32 crc kubenswrapper[4768]: I0217 13:53:32.282463 4768 generic.go:334] "Generic (PLEG): container finished" podID="75bf7b04-fd76-440d-b975-abf1c4972c4f" containerID="6962d495bcbd8bd6fa788b712c905afe801a6451ea4ecc9da9efc9a3a9d2b4da" exitCode=0 Feb 17 13:53:32 crc kubenswrapper[4768]: I0217 13:53:32.282523 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rkhhj" event={"ID":"75bf7b04-fd76-440d-b975-abf1c4972c4f","Type":"ContainerDied","Data":"6962d495bcbd8bd6fa788b712c905afe801a6451ea4ecc9da9efc9a3a9d2b4da"} Feb 17 13:53:32 crc kubenswrapper[4768]: I0217 13:53:32.287208 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9615d9e4-113e-4282-a091-a8c69a0c7968","Type":"ContainerStarted","Data":"e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4"} Feb 17 13:53:32 crc kubenswrapper[4768]: I0217 13:53:32.289818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d5df4be-f003-429d-8a84-81a239db88c0","Type":"ContainerStarted","Data":"16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add"} Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.296930 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rkhhj" 
event={"ID":"75bf7b04-fd76-440d-b975-abf1c4972c4f","Type":"ContainerStarted","Data":"6e4fb6ee34c0b0bf61f87686ced83b7898d468d4d03bfd6bf2bd7413e1cd234a"} Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.297337 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.297351 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rkhhj" event={"ID":"75bf7b04-fd76-440d-b975-abf1c4972c4f","Type":"ContainerStarted","Data":"f23575dfdac3c017acf3805ffcff81a16bc95164930d77ca5ccc46d9a467426e"} Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.299773 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e5947dc-7f07-4498-8be8-2b0c184c5853","Type":"ContainerStarted","Data":"5d7d8567998633e825ef26a7db1ead4127d252e5f72f1e1f07f7d52e5061970b"} Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.301876 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" containerName="dnsmasq-dns" containerID="cri-o://3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056" gracePeriod=10 Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.302176 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1296e827-af28-4d2e-a80d-33add3697b6e","Type":"ContainerStarted","Data":"d12dbe2155b37d1a341d479d5104f3845696153bffba06813f857e106bf31acf"} Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.319957 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rkhhj" podStartSLOduration=11.554941008 podStartE2EDuration="18.319923779s" podCreationTimestamp="2026-02-17 13:53:15 +0000 UTC" firstStartedPulling="2026-02-17 13:53:23.334354332 +0000 UTC m=+1022.613740774" 
lastFinishedPulling="2026-02-17 13:53:30.099337083 +0000 UTC m=+1029.378723545" observedRunningTime="2026-02-17 13:53:33.316599388 +0000 UTC m=+1032.595985830" watchObservedRunningTime="2026-02-17 13:53:33.319923779 +0000 UTC m=+1032.599310221" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.342296 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.514527077 podStartE2EDuration="15.342265153s" podCreationTimestamp="2026-02-17 13:53:18 +0000 UTC" firstStartedPulling="2026-02-17 13:53:25.9815332 +0000 UTC m=+1025.260919652" lastFinishedPulling="2026-02-17 13:53:32.809271286 +0000 UTC m=+1032.088657728" observedRunningTime="2026-02-17 13:53:33.339452745 +0000 UTC m=+1032.618839187" watchObservedRunningTime="2026-02-17 13:53:33.342265153 +0000 UTC m=+1032.621651595" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.363218 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.087149469 podStartE2EDuration="18.363198738s" podCreationTimestamp="2026-02-17 13:53:15 +0000 UTC" firstStartedPulling="2026-02-17 13:53:23.50203554 +0000 UTC m=+1022.781421982" lastFinishedPulling="2026-02-17 13:53:32.778084809 +0000 UTC m=+1032.057471251" observedRunningTime="2026-02-17 13:53:33.358873199 +0000 UTC m=+1032.638259641" watchObservedRunningTime="2026-02-17 13:53:33.363198738 +0000 UTC m=+1032.642585180" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.689080 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.783143 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-dns-svc\") pod \"738597ef-0d1a-40e5-b592-82f54af22e13\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.783281 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-config\") pod \"738597ef-0d1a-40e5-b592-82f54af22e13\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.783303 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzlfz\" (UniqueName: \"kubernetes.io/projected/738597ef-0d1a-40e5-b592-82f54af22e13-kube-api-access-dzlfz\") pod \"738597ef-0d1a-40e5-b592-82f54af22e13\" (UID: \"738597ef-0d1a-40e5-b592-82f54af22e13\") " Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.788735 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738597ef-0d1a-40e5-b592-82f54af22e13-kube-api-access-dzlfz" (OuterVolumeSpecName: "kube-api-access-dzlfz") pod "738597ef-0d1a-40e5-b592-82f54af22e13" (UID: "738597ef-0d1a-40e5-b592-82f54af22e13"). InnerVolumeSpecName "kube-api-access-dzlfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.815701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "738597ef-0d1a-40e5-b592-82f54af22e13" (UID: "738597ef-0d1a-40e5-b592-82f54af22e13"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.818737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-config" (OuterVolumeSpecName: "config") pod "738597ef-0d1a-40e5-b592-82f54af22e13" (UID: "738597ef-0d1a-40e5-b592-82f54af22e13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.885141 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.885176 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738597ef-0d1a-40e5-b592-82f54af22e13-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:33 crc kubenswrapper[4768]: I0217 13:53:33.885189 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzlfz\" (UniqueName: \"kubernetes.io/projected/738597ef-0d1a-40e5-b592-82f54af22e13-kube-api-access-dzlfz\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.312443 4768 generic.go:334] "Generic (PLEG): container finished" podID="5ba1ccc6-d556-4867-8e12-a5747dba1ffa" containerID="b8f6c22c70fb5a2b49b023bfd46379968b3c1a9d2d33df96619c0532dc3500f7" exitCode=0 Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.312558 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5ba1ccc6-d556-4867-8e12-a5747dba1ffa","Type":"ContainerDied","Data":"b8f6c22c70fb5a2b49b023bfd46379968b3c1a9d2d33df96619c0532dc3500f7"} Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.318300 4768 generic.go:334] "Generic (PLEG): container finished" podID="738597ef-0d1a-40e5-b592-82f54af22e13" 
containerID="3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056" exitCode=0 Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.318388 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.318410 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" event={"ID":"738597ef-0d1a-40e5-b592-82f54af22e13","Type":"ContainerDied","Data":"3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056"} Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.320216 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-99fvk" event={"ID":"738597ef-0d1a-40e5-b592-82f54af22e13","Type":"ContainerDied","Data":"93ab68618e5720085a0390cacd11c01ed74fbd0f7b457be228c6d51c1bbe5a76"} Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.320306 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.320389 4768 scope.go:117] "RemoveContainer" containerID="3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.361504 4768 scope.go:117] "RemoveContainer" containerID="6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.361581 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99fvk"] Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.366484 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-99fvk"] Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.385824 4768 scope.go:117] "RemoveContainer" containerID="3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056" Feb 17 13:53:34 crc kubenswrapper[4768]: E0217 
13:53:34.386669 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056\": container with ID starting with 3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056 not found: ID does not exist" containerID="3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.386708 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056"} err="failed to get container status \"3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056\": rpc error: code = NotFound desc = could not find container \"3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056\": container with ID starting with 3143ed2278054472e520d9f2857a2c91a3e22a8559bba400f420b62120023056 not found: ID does not exist" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.386732 4768 scope.go:117] "RemoveContainer" containerID="6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa" Feb 17 13:53:34 crc kubenswrapper[4768]: E0217 13:53:34.387308 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa\": container with ID starting with 6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa not found: ID does not exist" containerID="6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.387354 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa"} err="failed to get container status \"6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa\": rpc 
error: code = NotFound desc = could not find container \"6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa\": container with ID starting with 6ce259f7f97b05c37e535ecbe2794150fa273c2e2ef4c46607e929f2b1185faa not found: ID does not exist" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.835449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:34 crc kubenswrapper[4768]: I0217 13:53:34.881800 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.282849 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.282935 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.323862 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.333154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5ba1ccc6-d556-4867-8e12-a5747dba1ffa","Type":"ContainerStarted","Data":"0763e9b4ed729f35bc4550140ab88f44682b554f25742da8b06e06f3cac2735e"} Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.335073 4768 generic.go:334] "Generic (PLEG): container finished" podID="a0368ca4-d5b7-4604-b15a-a7cb4fcf5652" containerID="d4a2b1485b5e815515a8b505fd7abec725ba44319afa2850175f7b299680e08b" exitCode=0 Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.335134 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652","Type":"ContainerDied","Data":"d4a2b1485b5e815515a8b505fd7abec725ba44319afa2850175f7b299680e08b"} Feb 17 13:53:35 crc 
kubenswrapper[4768]: I0217 13:53:35.338419 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.392301 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.592595725 podStartE2EDuration="28.39227674s" podCreationTimestamp="2026-02-17 13:53:07 +0000 UTC" firstStartedPulling="2026-02-17 13:53:23.204709139 +0000 UTC m=+1022.484095591" lastFinishedPulling="2026-02-17 13:53:30.004390164 +0000 UTC m=+1029.283776606" observedRunningTime="2026-02-17 13:53:35.382688907 +0000 UTC m=+1034.662075349" watchObservedRunningTime="2026-02-17 13:53:35.39227674 +0000 UTC m=+1034.671663202" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.396391 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.397182 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.546859 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" path="/var/lib/kubelet/pods/738597ef-0d1a-40e5-b592-82f54af22e13/volumes" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.654259 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6fww7"] Feb 17 13:53:35 crc kubenswrapper[4768]: E0217 13:53:35.654882 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" containerName="init" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.654901 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" containerName="init" Feb 17 13:53:35 crc kubenswrapper[4768]: E0217 13:53:35.654922 4768 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" containerName="dnsmasq-dns" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.654932 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" containerName="dnsmasq-dns" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.655091 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="738597ef-0d1a-40e5-b592-82f54af22e13" containerName="dnsmasq-dns" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.656173 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.658151 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.673086 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6fww7"] Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.702654 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.714718 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rchd4"] Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.715596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnbh\" (UniqueName: \"kubernetes.io/projected/ebb0b607-4911-4e5b-887b-934c3d10fabf-kube-api-access-kwnbh\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.715740 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.715799 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-config\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.715849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.715896 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.718265 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.730536 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rchd4"] Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.799413 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.805560 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.810404 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.810660 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.810845 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.811057 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fg8j7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.812684 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.817372 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d969a380-827a-46eb-8f6e-9f28ae50312a-ovs-rundir\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.817682 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d969a380-827a-46eb-8f6e-9f28ae50312a-ovn-rundir\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.817809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d969a380-827a-46eb-8f6e-9f28ae50312a-config\") pod \"ovn-controller-metrics-rchd4\" (UID: 
\"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.817968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.818125 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-config\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.818285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.818431 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d969a380-827a-46eb-8f6e-9f28ae50312a-combined-ca-bundle\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.818561 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnbh\" (UniqueName: \"kubernetes.io/projected/ebb0b607-4911-4e5b-887b-934c3d10fabf-kube-api-access-kwnbh\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.818693 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d969a380-827a-46eb-8f6e-9f28ae50312a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.818801 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44gwh\" (UniqueName: \"kubernetes.io/projected/d969a380-827a-46eb-8f6e-9f28ae50312a-kube-api-access-44gwh\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.819969 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.820254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-config\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.821189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.837918 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6fww7"] Feb 17 13:53:35 crc kubenswrapper[4768]: E0217 13:53:35.839829 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kwnbh], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" podUID="ebb0b607-4911-4e5b-887b-934c3d10fabf" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.882343 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnbh\" (UniqueName: \"kubernetes.io/projected/ebb0b607-4911-4e5b-887b-934c3d10fabf-kube-api-access-kwnbh\") pod \"dnsmasq-dns-7fd796d7df-6fww7\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.887213 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-spclm"] Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.888491 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.890923 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.906091 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-spclm"] Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.920798 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.920866 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvtt\" (UniqueName: \"kubernetes.io/projected/399cc340-c212-4298-a995-236a556b5108-kube-api-access-gbvtt\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.920898 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.920934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d969a380-827a-46eb-8f6e-9f28ae50312a-ovs-rundir\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc 
kubenswrapper[4768]: I0217 13:53:35.920962 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d969a380-827a-46eb-8f6e-9f28ae50312a-ovn-rundir\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.920984 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d969a380-827a-46eb-8f6e-9f28ae50312a-config\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921011 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921048 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee36c1-d509-4c0c-960a-279955237a10-config\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921086 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921145 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzdn\" (UniqueName: \"kubernetes.io/projected/41ee36c1-d509-4c0c-960a-279955237a10-kube-api-access-zzzdn\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921182 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921207 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41ee36c1-d509-4c0c-960a-279955237a10-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921238 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d969a380-827a-46eb-8f6e-9f28ae50312a-combined-ca-bundle\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921265 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921304 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-config\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921334 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d969a380-827a-46eb-8f6e-9f28ae50312a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921360 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44gwh\" (UniqueName: \"kubernetes.io/projected/d969a380-827a-46eb-8f6e-9f28ae50312a-kube-api-access-44gwh\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921397 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee36c1-d509-4c0c-960a-279955237a10-scripts\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.921783 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d969a380-827a-46eb-8f6e-9f28ae50312a-ovs-rundir\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.922591 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/d969a380-827a-46eb-8f6e-9f28ae50312a-ovn-rundir\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.923319 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d969a380-827a-46eb-8f6e-9f28ae50312a-config\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.928086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d969a380-827a-46eb-8f6e-9f28ae50312a-combined-ca-bundle\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.943304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44gwh\" (UniqueName: \"kubernetes.io/projected/d969a380-827a-46eb-8f6e-9f28ae50312a-kube-api-access-44gwh\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:35 crc kubenswrapper[4768]: I0217 13:53:35.944948 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d969a380-827a-46eb-8f6e-9f28ae50312a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rchd4\" (UID: \"d969a380-827a-46eb-8f6e-9f28ae50312a\") " pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.023195 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.023538 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbvtt\" (UniqueName: \"kubernetes.io/projected/399cc340-c212-4298-a995-236a556b5108-kube-api-access-gbvtt\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.023621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.023715 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.023790 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee36c1-d509-4c0c-960a-279955237a10-config\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.023873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.023946 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzdn\" (UniqueName: \"kubernetes.io/projected/41ee36c1-d509-4c0c-960a-279955237a10-kube-api-access-zzzdn\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.024016 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.024093 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41ee36c1-d509-4c0c-960a-279955237a10-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.024179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.024279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-config\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.024473 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee36c1-d509-4c0c-960a-279955237a10-scripts\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.025689 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ee36c1-d509-4c0c-960a-279955237a10-scripts\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.025875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41ee36c1-d509-4c0c-960a-279955237a10-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.026965 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.027199 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.027642 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-config\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: 
\"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.028282 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.028338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.028632 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.028721 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ee36c1-d509-4c0c-960a-279955237a10-config\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.040087 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ee36c1-d509-4c0c-960a-279955237a10-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.043400 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rchd4" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.053071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbvtt\" (UniqueName: \"kubernetes.io/projected/399cc340-c212-4298-a995-236a556b5108-kube-api-access-gbvtt\") pod \"dnsmasq-dns-86db49b7ff-spclm\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.059974 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzdn\" (UniqueName: \"kubernetes.io/projected/41ee36c1-d509-4c0c-960a-279955237a10-kube-api-access-zzzdn\") pod \"ovn-northd-0\" (UID: \"41ee36c1-d509-4c0c-960a-279955237a10\") " pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.163241 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.223766 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.354712 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.355252 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a0368ca4-d5b7-4604-b15a-a7cb4fcf5652","Type":"ContainerStarted","Data":"ec287f1d66ef01897d1ca7ae3b7b288913e010a3bb98c4f76d9bec90d00a49ce"} Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.369026 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.383663 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.733033937 podStartE2EDuration="27.383645525s" podCreationTimestamp="2026-02-17 13:53:09 +0000 UTC" firstStartedPulling="2026-02-17 13:53:23.183057844 +0000 UTC m=+1022.462444286" lastFinishedPulling="2026-02-17 13:53:29.833669432 +0000 UTC m=+1029.113055874" observedRunningTime="2026-02-17 13:53:36.378596686 +0000 UTC m=+1035.657983128" watchObservedRunningTime="2026-02-17 13:53:36.383645525 +0000 UTC m=+1035.663031977" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.433507 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-dns-svc\") pod \"ebb0b607-4911-4e5b-887b-934c3d10fabf\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.433556 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnbh\" (UniqueName: \"kubernetes.io/projected/ebb0b607-4911-4e5b-887b-934c3d10fabf-kube-api-access-kwnbh\") pod \"ebb0b607-4911-4e5b-887b-934c3d10fabf\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.433615 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-ovsdbserver-nb\") pod \"ebb0b607-4911-4e5b-887b-934c3d10fabf\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.433721 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-config\") 
pod \"ebb0b607-4911-4e5b-887b-934c3d10fabf\" (UID: \"ebb0b607-4911-4e5b-887b-934c3d10fabf\") " Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.435238 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebb0b607-4911-4e5b-887b-934c3d10fabf" (UID: "ebb0b607-4911-4e5b-887b-934c3d10fabf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.436720 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebb0b607-4911-4e5b-887b-934c3d10fabf" (UID: "ebb0b607-4911-4e5b-887b-934c3d10fabf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.436785 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-config" (OuterVolumeSpecName: "config") pod "ebb0b607-4911-4e5b-887b-934c3d10fabf" (UID: "ebb0b607-4911-4e5b-887b-934c3d10fabf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.442303 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb0b607-4911-4e5b-887b-934c3d10fabf-kube-api-access-kwnbh" (OuterVolumeSpecName: "kube-api-access-kwnbh") pod "ebb0b607-4911-4e5b-887b-934c3d10fabf" (UID: "ebb0b607-4911-4e5b-887b-934c3d10fabf"). InnerVolumeSpecName "kube-api-access-kwnbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.453050 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rchd4"] Feb 17 13:53:36 crc kubenswrapper[4768]: W0217 13:53:36.457717 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd969a380_827a_46eb_8f6e_9f28ae50312a.slice/crio-469feb87b17e1ffde554c0770a1bce676ca5bfffb4112069879f3ac08be46ca0 WatchSource:0}: Error finding container 469feb87b17e1ffde554c0770a1bce676ca5bfffb4112069879f3ac08be46ca0: Status 404 returned error can't find the container with id 469feb87b17e1ffde554c0770a1bce676ca5bfffb4112069879f3ac08be46ca0 Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.535280 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.535578 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwnbh\" (UniqueName: \"kubernetes.io/projected/ebb0b607-4911-4e5b-887b-934c3d10fabf-kube-api-access-kwnbh\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.535608 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.535620 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb0b607-4911-4e5b-887b-934c3d10fabf-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.619663 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 13:53:36 crc 
kubenswrapper[4768]: W0217 13:53:36.622971 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ee36c1_d509_4c0c_960a_279955237a10.slice/crio-85e2d3b82a11970ecfd70266c5ad8316e561e2eddf796c671f12feacf1dd7dfa WatchSource:0}: Error finding container 85e2d3b82a11970ecfd70266c5ad8316e561e2eddf796c671f12feacf1dd7dfa: Status 404 returned error can't find the container with id 85e2d3b82a11970ecfd70266c5ad8316e561e2eddf796c671f12feacf1dd7dfa Feb 17 13:53:36 crc kubenswrapper[4768]: I0217 13:53:36.713873 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-spclm"] Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.372955 4768 generic.go:334] "Generic (PLEG): container finished" podID="399cc340-c212-4298-a995-236a556b5108" containerID="ec5b17ca91f73ae4c501402508cdfa4487b8f30008e45e55d9fe56bea35934c3" exitCode=0 Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.373079 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" event={"ID":"399cc340-c212-4298-a995-236a556b5108","Type":"ContainerDied","Data":"ec5b17ca91f73ae4c501402508cdfa4487b8f30008e45e55d9fe56bea35934c3"} Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.373393 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" event={"ID":"399cc340-c212-4298-a995-236a556b5108","Type":"ContainerStarted","Data":"7f6c3199137ef16c57575f685309077482c5a1ccfd4ae987972511c94cd99699"} Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.377062 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"41ee36c1-d509-4c0c-960a-279955237a10","Type":"ContainerStarted","Data":"85e2d3b82a11970ecfd70266c5ad8316e561e2eddf796c671f12feacf1dd7dfa"} Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.382067 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6fww7" Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.382131 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rchd4" event={"ID":"d969a380-827a-46eb-8f6e-9f28ae50312a","Type":"ContainerStarted","Data":"e96735cf5f4b3c46585ecb665553fe5729a4679c6481495079311575ee3cf2e1"} Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.382164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rchd4" event={"ID":"d969a380-827a-46eb-8f6e-9f28ae50312a","Type":"ContainerStarted","Data":"469feb87b17e1ffde554c0770a1bce676ca5bfffb4112069879f3ac08be46ca0"} Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.417009 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rchd4" podStartSLOduration=2.416990842 podStartE2EDuration="2.416990842s" podCreationTimestamp="2026-02-17 13:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:53:37.414598397 +0000 UTC m=+1036.693984839" watchObservedRunningTime="2026-02-17 13:53:37.416990842 +0000 UTC m=+1036.696377284" Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.471392 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6fww7"] Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.478621 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6fww7"] Feb 17 13:53:37 crc kubenswrapper[4768]: I0217 13:53:37.548067 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb0b607-4911-4e5b-887b-934c3d10fabf" path="/var/lib/kubelet/pods/ebb0b607-4911-4e5b-887b-934c3d10fabf/volumes" Feb 17 13:53:38 crc kubenswrapper[4768]: I0217 13:53:38.392279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-86db49b7ff-spclm" event={"ID":"399cc340-c212-4298-a995-236a556b5108","Type":"ContainerStarted","Data":"5079b23a939a7e7bc1747ee38ce622a99cd22705bfb8194defeb9e203173fae4"} Feb 17 13:53:38 crc kubenswrapper[4768]: I0217 13:53:38.392616 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:38 crc kubenswrapper[4768]: I0217 13:53:38.394325 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"41ee36c1-d509-4c0c-960a-279955237a10","Type":"ContainerStarted","Data":"0d25c9d37153c7e42708382333551527696970ebd5eab3f43a4f3d23c1ee2b8c"} Feb 17 13:53:38 crc kubenswrapper[4768]: I0217 13:53:38.394387 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"41ee36c1-d509-4c0c-960a-279955237a10","Type":"ContainerStarted","Data":"8f74f3699e82c05c730a2e6782e24fa9cc8350b6629f670f1c4890e36c706f52"} Feb 17 13:53:38 crc kubenswrapper[4768]: I0217 13:53:38.394560 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 13:53:38 crc kubenswrapper[4768]: I0217 13:53:38.414397 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" podStartSLOduration=3.414379262 podStartE2EDuration="3.414379262s" podCreationTimestamp="2026-02-17 13:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:53:38.412282214 +0000 UTC m=+1037.691668656" watchObservedRunningTime="2026-02-17 13:53:38.414379262 +0000 UTC m=+1037.693765704" Feb 17 13:53:38 crc kubenswrapper[4768]: I0217 13:53:38.438367 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.39621349 podStartE2EDuration="3.43834818s" podCreationTimestamp="2026-02-17 13:53:35 +0000 UTC" 
firstStartedPulling="2026-02-17 13:53:36.625228163 +0000 UTC m=+1035.904614605" lastFinishedPulling="2026-02-17 13:53:37.667362853 +0000 UTC m=+1036.946749295" observedRunningTime="2026-02-17 13:53:38.43066001 +0000 UTC m=+1037.710046452" watchObservedRunningTime="2026-02-17 13:53:38.43834818 +0000 UTC m=+1037.717734632" Feb 17 13:53:39 crc kubenswrapper[4768]: E0217 13:53:39.048526 4768 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.36:35070->38.102.83.36:45817: read tcp 38.102.83.36:35070->38.102.83.36:45817: read: connection reset by peer Feb 17 13:53:39 crc kubenswrapper[4768]: I0217 13:53:39.185229 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 13:53:39 crc kubenswrapper[4768]: I0217 13:53:39.185277 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 13:53:39 crc kubenswrapper[4768]: I0217 13:53:39.262966 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 13:53:39 crc kubenswrapper[4768]: I0217 13:53:39.470385 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 13:53:40 crc kubenswrapper[4768]: I0217 13:53:40.570839 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:40 crc kubenswrapper[4768]: I0217 13:53:40.571369 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:40 crc kubenswrapper[4768]: I0217 13:53:40.681140 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.527751 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 13:53:41 
crc kubenswrapper[4768]: I0217 13:53:41.780599 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3797-account-create-update-bftff"] Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.781731 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.786925 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.791688 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3797-account-create-update-bftff"] Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.836962 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6qstf"] Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.838317 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.844466 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6qstf"] Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.945860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f9209b-c554-49e3-886a-4e9ee73ebe3e-operator-scripts\") pod \"keystone-3797-account-create-update-bftff\" (UID: \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") " pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.945943 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vm2\" (UniqueName: \"kubernetes.io/projected/43f9209b-c554-49e3-886a-4e9ee73ebe3e-kube-api-access-28vm2\") pod \"keystone-3797-account-create-update-bftff\" (UID: 
\"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") " pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.946015 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdqz\" (UniqueName: \"kubernetes.io/projected/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-kube-api-access-lxdqz\") pod \"keystone-db-create-6qstf\" (UID: \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") " pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:41 crc kubenswrapper[4768]: I0217 13:53:41.946301 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-operator-scripts\") pod \"keystone-db-create-6qstf\" (UID: \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") " pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.029941 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2wbdp"] Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.031299 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.050159 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f9209b-c554-49e3-886a-4e9ee73ebe3e-operator-scripts\") pod \"keystone-3797-account-create-update-bftff\" (UID: \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") " pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.050213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28vm2\" (UniqueName: \"kubernetes.io/projected/43f9209b-c554-49e3-886a-4e9ee73ebe3e-kube-api-access-28vm2\") pod \"keystone-3797-account-create-update-bftff\" (UID: \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") " pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.050254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxdqz\" (UniqueName: \"kubernetes.io/projected/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-kube-api-access-lxdqz\") pod \"keystone-db-create-6qstf\" (UID: \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") " pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.050298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-operator-scripts\") pod \"keystone-db-create-6qstf\" (UID: \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") " pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.050947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-operator-scripts\") pod \"keystone-db-create-6qstf\" (UID: 
\"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") " pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.051452 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f9209b-c554-49e3-886a-4e9ee73ebe3e-operator-scripts\") pod \"keystone-3797-account-create-update-bftff\" (UID: \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") " pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.055371 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8557-account-create-update-zm7x5"] Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.059657 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.063789 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.079732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxdqz\" (UniqueName: \"kubernetes.io/projected/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-kube-api-access-lxdqz\") pod \"keystone-db-create-6qstf\" (UID: \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") " pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.081175 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2wbdp"] Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.081959 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28vm2\" (UniqueName: \"kubernetes.io/projected/43f9209b-c554-49e3-886a-4e9ee73ebe3e-kube-api-access-28vm2\") pod \"keystone-3797-account-create-update-bftff\" (UID: \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") " 
pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.088327 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8557-account-create-update-zm7x5"] Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.103823 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3797-account-create-update-bftff" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.153255 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hxn\" (UniqueName: \"kubernetes.io/projected/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-kube-api-access-s8hxn\") pod \"placement-db-create-2wbdp\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") " pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.153573 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrjt\" (UniqueName: \"kubernetes.io/projected/fb47d4a7-16fc-402d-8943-40d7d22a00c4-kube-api-access-nhrjt\") pod \"placement-8557-account-create-update-zm7x5\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") " pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.153753 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb47d4a7-16fc-402d-8943-40d7d22a00c4-operator-scripts\") pod \"placement-8557-account-create-update-zm7x5\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") " pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.154124 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-operator-scripts\") pod \"placement-db-create-2wbdp\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") " pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.158046 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qstf" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.255579 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-operator-scripts\") pod \"placement-db-create-2wbdp\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") " pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.255647 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hxn\" (UniqueName: \"kubernetes.io/projected/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-kube-api-access-s8hxn\") pod \"placement-db-create-2wbdp\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") " pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.255711 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrjt\" (UniqueName: \"kubernetes.io/projected/fb47d4a7-16fc-402d-8943-40d7d22a00c4-kube-api-access-nhrjt\") pod \"placement-8557-account-create-update-zm7x5\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") " pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.255771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb47d4a7-16fc-402d-8943-40d7d22a00c4-operator-scripts\") pod \"placement-8557-account-create-update-zm7x5\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") " 
pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.256811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-operator-scripts\") pod \"placement-db-create-2wbdp\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") " pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.258526 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb47d4a7-16fc-402d-8943-40d7d22a00c4-operator-scripts\") pod \"placement-8557-account-create-update-zm7x5\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") " pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.273860 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrjt\" (UniqueName: \"kubernetes.io/projected/fb47d4a7-16fc-402d-8943-40d7d22a00c4-kube-api-access-nhrjt\") pod \"placement-8557-account-create-update-zm7x5\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") " pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.273972 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8hxn\" (UniqueName: \"kubernetes.io/projected/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-kube-api-access-s8hxn\") pod \"placement-db-create-2wbdp\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") " pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.349959 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2wbdp" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.381870 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8557-account-create-update-zm7x5" Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.546427 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3797-account-create-update-bftff"] Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.654837 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6qstf"] Feb 17 13:53:42 crc kubenswrapper[4768]: W0217 13:53:42.657781 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff18ff3e_97c6_433b_8ad9_837a77fb0e88.slice/crio-6deb9698a0dae9fb6c89acb3a1f01c5f420da440b946afcf89e7c42345e79fdf WatchSource:0}: Error finding container 6deb9698a0dae9fb6c89acb3a1f01c5f420da440b946afcf89e7c42345e79fdf: Status 404 returned error can't find the container with id 6deb9698a0dae9fb6c89acb3a1f01c5f420da440b946afcf89e7c42345e79fdf Feb 17 13:53:42 crc kubenswrapper[4768]: W0217 13:53:42.754277 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44008c3e_4ca6_4d59_8a7c_046a28c72b7d.slice/crio-7362fc9ac08e7584e0ffa2b77aae1983863403224b3bfc0a238141df0d093446 WatchSource:0}: Error finding container 7362fc9ac08e7584e0ffa2b77aae1983863403224b3bfc0a238141df0d093446: Status 404 returned error can't find the container with id 7362fc9ac08e7584e0ffa2b77aae1983863403224b3bfc0a238141df0d093446 Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.756903 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2wbdp"] Feb 17 13:53:42 crc kubenswrapper[4768]: I0217 13:53:42.846118 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8557-account-create-update-zm7x5"] Feb 17 13:53:42 crc kubenswrapper[4768]: W0217 13:53:42.865177 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb47d4a7_16fc_402d_8943_40d7d22a00c4.slice/crio-68f22efb6d3f1a98a35e1a6d6ea3b2712294e9bda0167be8b92de785914cf8a9 WatchSource:0}: Error finding container 68f22efb6d3f1a98a35e1a6d6ea3b2712294e9bda0167be8b92de785914cf8a9: Status 404 returned error can't find the container with id 68f22efb6d3f1a98a35e1a6d6ea3b2712294e9bda0167be8b92de785914cf8a9 Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.190794 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-spclm"] Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.191237 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" podUID="399cc340-c212-4298-a995-236a556b5108" containerName="dnsmasq-dns" containerID="cri-o://5079b23a939a7e7bc1747ee38ce622a99cd22705bfb8194defeb9e203173fae4" gracePeriod=10 Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.193310 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.198881 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.245644 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-5ffzn"] Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.247167 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.252324 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5ffzn"] Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.375886 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.376337 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.376379 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-dns-svc\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.376418 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-config\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.376498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-59ljb\" (UniqueName: \"kubernetes.io/projected/d185e75d-9b91-415c-baba-1f5bea3b5ad1-kube-api-access-59ljb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.440442 4768 generic.go:334] "Generic (PLEG): container finished" podID="ff18ff3e-97c6-433b-8ad9-837a77fb0e88" containerID="4d18fd16d9b7b5f25fe352c251ebd2670b27c45b1266d760555f5efff85d5253" exitCode=0 Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.440487 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qstf" event={"ID":"ff18ff3e-97c6-433b-8ad9-837a77fb0e88","Type":"ContainerDied","Data":"4d18fd16d9b7b5f25fe352c251ebd2670b27c45b1266d760555f5efff85d5253"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.440589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qstf" event={"ID":"ff18ff3e-97c6-433b-8ad9-837a77fb0e88","Type":"ContainerStarted","Data":"6deb9698a0dae9fb6c89acb3a1f01c5f420da440b946afcf89e7c42345e79fdf"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.442511 4768 generic.go:334] "Generic (PLEG): container finished" podID="43f9209b-c554-49e3-886a-4e9ee73ebe3e" containerID="ce4bad8f9dc20d5d0b127a1d1075495f86c7606d70dbca27d62c46cbae1bb061" exitCode=0 Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.442548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3797-account-create-update-bftff" event={"ID":"43f9209b-c554-49e3-886a-4e9ee73ebe3e","Type":"ContainerDied","Data":"ce4bad8f9dc20d5d0b127a1d1075495f86c7606d70dbca27d62c46cbae1bb061"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.442574 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3797-account-create-update-bftff" 
event={"ID":"43f9209b-c554-49e3-886a-4e9ee73ebe3e","Type":"ContainerStarted","Data":"18b6ace3346902bacc9e18e7f2385c7293ebd0a77009a012494e63ab18f39fe4"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.443963 4768 generic.go:334] "Generic (PLEG): container finished" podID="399cc340-c212-4298-a995-236a556b5108" containerID="5079b23a939a7e7bc1747ee38ce622a99cd22705bfb8194defeb9e203173fae4" exitCode=0 Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.444020 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" event={"ID":"399cc340-c212-4298-a995-236a556b5108","Type":"ContainerDied","Data":"5079b23a939a7e7bc1747ee38ce622a99cd22705bfb8194defeb9e203173fae4"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.449324 4768 generic.go:334] "Generic (PLEG): container finished" podID="fb47d4a7-16fc-402d-8943-40d7d22a00c4" containerID="f0a0952c2bc3c6fb4ccf78bb5d4b8ebe9205bb68ab4808bcc5ad6cbd12d56f76" exitCode=0 Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.449406 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8557-account-create-update-zm7x5" event={"ID":"fb47d4a7-16fc-402d-8943-40d7d22a00c4","Type":"ContainerDied","Data":"f0a0952c2bc3c6fb4ccf78bb5d4b8ebe9205bb68ab4808bcc5ad6cbd12d56f76"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.449432 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8557-account-create-update-zm7x5" event={"ID":"fb47d4a7-16fc-402d-8943-40d7d22a00c4","Type":"ContainerStarted","Data":"68f22efb6d3f1a98a35e1a6d6ea3b2712294e9bda0167be8b92de785914cf8a9"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.450640 4768 generic.go:334] "Generic (PLEG): container finished" podID="44008c3e-4ca6-4d59-8a7c-046a28c72b7d" containerID="a88311b8751f759efa505a5321b475ddf302f354e83e59b4c701de0054d75e95" exitCode=0 Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.450680 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/placement-db-create-2wbdp" event={"ID":"44008c3e-4ca6-4d59-8a7c-046a28c72b7d","Type":"ContainerDied","Data":"a88311b8751f759efa505a5321b475ddf302f354e83e59b4c701de0054d75e95"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.450703 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2wbdp" event={"ID":"44008c3e-4ca6-4d59-8a7c-046a28c72b7d","Type":"ContainerStarted","Data":"7362fc9ac08e7584e0ffa2b77aae1983863403224b3bfc0a238141df0d093446"} Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.477874 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-config\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.477985 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ljb\" (UniqueName: \"kubernetes.io/projected/d185e75d-9b91-415c-baba-1f5bea3b5ad1-kube-api-access-59ljb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.478060 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.478130 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: 
\"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.478171 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-dns-svc\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.479911 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-dns-svc\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.479954 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-config\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.480071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.480568 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: 
I0217 13:53:43.495209 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ljb\" (UniqueName: \"kubernetes.io/projected/d185e75d-9b91-415c-baba-1f5bea3b5ad1-kube-api-access-59ljb\") pod \"dnsmasq-dns-698758b865-5ffzn\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.657085 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.764010 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.885687 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-nb\") pod \"399cc340-c212-4298-a995-236a556b5108\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.885744 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbvtt\" (UniqueName: \"kubernetes.io/projected/399cc340-c212-4298-a995-236a556b5108-kube-api-access-gbvtt\") pod \"399cc340-c212-4298-a995-236a556b5108\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.885821 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-sb\") pod \"399cc340-c212-4298-a995-236a556b5108\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.886420 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-config\") pod \"399cc340-c212-4298-a995-236a556b5108\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.887143 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-dns-svc\") pod \"399cc340-c212-4298-a995-236a556b5108\" (UID: \"399cc340-c212-4298-a995-236a556b5108\") " Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.891087 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399cc340-c212-4298-a995-236a556b5108-kube-api-access-gbvtt" (OuterVolumeSpecName: "kube-api-access-gbvtt") pod "399cc340-c212-4298-a995-236a556b5108" (UID: "399cc340-c212-4298-a995-236a556b5108"). InnerVolumeSpecName "kube-api-access-gbvtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.924592 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "399cc340-c212-4298-a995-236a556b5108" (UID: "399cc340-c212-4298-a995-236a556b5108"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.934659 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "399cc340-c212-4298-a995-236a556b5108" (UID: "399cc340-c212-4298-a995-236a556b5108"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.938337 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-config" (OuterVolumeSpecName: "config") pod "399cc340-c212-4298-a995-236a556b5108" (UID: "399cc340-c212-4298-a995-236a556b5108"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.938454 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "399cc340-c212-4298-a995-236a556b5108" (UID: "399cc340-c212-4298-a995-236a556b5108"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.988706 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.988735 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.988747 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbvtt\" (UniqueName: \"kubernetes.io/projected/399cc340-c212-4298-a995-236a556b5108-kube-api-access-gbvtt\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:43 crc kubenswrapper[4768]: I0217 13:53:43.988755 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:43 crc 
kubenswrapper[4768]: I0217 13:53:43.988780 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/399cc340-c212-4298-a995-236a556b5108-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.131915 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5ffzn"] Feb 17 13:53:44 crc kubenswrapper[4768]: W0217 13:53:44.139627 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd185e75d_9b91_415c_baba_1f5bea3b5ad1.slice/crio-3487555ab75b87af5b96377599846551b596f1a5e9440fe3ce7914d22b007ac8 WatchSource:0}: Error finding container 3487555ab75b87af5b96377599846551b596f1a5e9440fe3ce7914d22b007ac8: Status 404 returned error can't find the container with id 3487555ab75b87af5b96377599846551b596f1a5e9440fe3ce7914d22b007ac8 Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.406695 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 13:53:44 crc kubenswrapper[4768]: E0217 13:53:44.408303 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399cc340-c212-4298-a995-236a556b5108" containerName="dnsmasq-dns" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.408377 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="399cc340-c212-4298-a995-236a556b5108" containerName="dnsmasq-dns" Feb 17 13:53:44 crc kubenswrapper[4768]: E0217 13:53:44.408452 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399cc340-c212-4298-a995-236a556b5108" containerName="init" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.408515 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="399cc340-c212-4298-a995-236a556b5108" containerName="init" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.408720 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="399cc340-c212-4298-a995-236a556b5108" containerName="dnsmasq-dns" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.413868 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.415674 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.416133 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-j9crr" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.415907 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.419486 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.482972 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" event={"ID":"399cc340-c212-4298-a995-236a556b5108","Type":"ContainerDied","Data":"7f6c3199137ef16c57575f685309077482c5a1ccfd4ae987972511c94cd99699"} Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.483033 4768 scope.go:117] "RemoveContainer" containerID="5079b23a939a7e7bc1747ee38ce622a99cd22705bfb8194defeb9e203173fae4" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.483213 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-spclm" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.484666 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.489163 4768 generic.go:334] "Generic (PLEG): container finished" podID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerID="386761d8e1e005a4fcb8d495b6850d531a7d26770bd0d0b80c0edc1eaed5e545" exitCode=0 Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.489855 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5ffzn" event={"ID":"d185e75d-9b91-415c-baba-1f5bea3b5ad1","Type":"ContainerDied","Data":"386761d8e1e005a4fcb8d495b6850d531a7d26770bd0d0b80c0edc1eaed5e545"} Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.489908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5ffzn" event={"ID":"d185e75d-9b91-415c-baba-1f5bea3b5ad1","Type":"ContainerStarted","Data":"3487555ab75b87af5b96377599846551b596f1a5e9440fe3ce7914d22b007ac8"} Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.495828 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.495881 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81d96922-74f7-4840-bcad-6f98ffb1bbdf-lock\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.495932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fsgwk\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-kube-api-access-fsgwk\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.495958 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d96922-74f7-4840-bcad-6f98ffb1bbdf-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.495990 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.496029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81d96922-74f7-4840-bcad-6f98ffb1bbdf-cache\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.508695 4768 scope.go:117] "RemoveContainer" containerID="ec5b17ca91f73ae4c501402508cdfa4487b8f30008e45e55d9fe56bea35934c3" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.597859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81d96922-74f7-4840-bcad-6f98ffb1bbdf-cache\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.598575 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.598609 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81d96922-74f7-4840-bcad-6f98ffb1bbdf-lock\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.598641 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81d96922-74f7-4840-bcad-6f98ffb1bbdf-cache\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.600607 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81d96922-74f7-4840-bcad-6f98ffb1bbdf-lock\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.600685 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.598677 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d96922-74f7-4840-bcad-6f98ffb1bbdf-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " 
pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.604064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgwk\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-kube-api-access-fsgwk\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.604124 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.605834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d96922-74f7-4840-bcad-6f98ffb1bbdf-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:44 crc kubenswrapper[4768]: E0217 13:53:44.608758 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 13:53:44 crc kubenswrapper[4768]: E0217 13:53:44.608800 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 13:53:44 crc kubenswrapper[4768]: E0217 13:53:44.608865 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift podName:81d96922-74f7-4840-bcad-6f98ffb1bbdf nodeName:}" failed. No retries permitted until 2026-02-17 13:53:45.108844624 +0000 UTC m=+1044.388231066 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift") pod "swift-storage-0" (UID: "81d96922-74f7-4840-bcad-6f98ffb1bbdf") : configmap "swift-ring-files" not found
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.614712 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-spclm"]
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.622873 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgwk\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-kube-api-access-fsgwk\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0"
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.624485 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-spclm"]
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.638770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0"
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.830761 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qstf"
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.907490 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxdqz\" (UniqueName: \"kubernetes.io/projected/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-kube-api-access-lxdqz\") pod \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\" (UID: \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") "
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.907635 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-operator-scripts\") pod \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\" (UID: \"ff18ff3e-97c6-433b-8ad9-837a77fb0e88\") "
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.908026 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff18ff3e-97c6-433b-8ad9-837a77fb0e88" (UID: "ff18ff3e-97c6-433b-8ad9-837a77fb0e88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.908466 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.914996 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-kube-api-access-lxdqz" (OuterVolumeSpecName: "kube-api-access-lxdqz") pod "ff18ff3e-97c6-433b-8ad9-837a77fb0e88" (UID: "ff18ff3e-97c6-433b-8ad9-837a77fb0e88"). InnerVolumeSpecName "kube-api-access-lxdqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.958069 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8557-account-create-update-zm7x5"
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.968477 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3797-account-create-update-bftff"
Feb 17 13:53:44 crc kubenswrapper[4768]: I0217 13:53:44.981732 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2wbdp"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.009193 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28vm2\" (UniqueName: \"kubernetes.io/projected/43f9209b-c554-49e3-886a-4e9ee73ebe3e-kube-api-access-28vm2\") pod \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\" (UID: \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") "
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.009281 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f9209b-c554-49e3-886a-4e9ee73ebe3e-operator-scripts\") pod \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\" (UID: \"43f9209b-c554-49e3-886a-4e9ee73ebe3e\") "
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.009406 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb47d4a7-16fc-402d-8943-40d7d22a00c4-operator-scripts\") pod \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") "
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.009447 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8hxn\" (UniqueName: \"kubernetes.io/projected/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-kube-api-access-s8hxn\") pod \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") "
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.009491 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-operator-scripts\") pod \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\" (UID: \"44008c3e-4ca6-4d59-8a7c-046a28c72b7d\") "
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.009590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhrjt\" (UniqueName: \"kubernetes.io/projected/fb47d4a7-16fc-402d-8943-40d7d22a00c4-kube-api-access-nhrjt\") pod \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\" (UID: \"fb47d4a7-16fc-402d-8943-40d7d22a00c4\") "
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.009904 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxdqz\" (UniqueName: \"kubernetes.io/projected/ff18ff3e-97c6-433b-8ad9-837a77fb0e88-kube-api-access-lxdqz\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.010693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb47d4a7-16fc-402d-8943-40d7d22a00c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb47d4a7-16fc-402d-8943-40d7d22a00c4" (UID: "fb47d4a7-16fc-402d-8943-40d7d22a00c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.013444 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb47d4a7-16fc-402d-8943-40d7d22a00c4-kube-api-access-nhrjt" (OuterVolumeSpecName: "kube-api-access-nhrjt") pod "fb47d4a7-16fc-402d-8943-40d7d22a00c4" (UID: "fb47d4a7-16fc-402d-8943-40d7d22a00c4"). InnerVolumeSpecName "kube-api-access-nhrjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.013989 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f9209b-c554-49e3-886a-4e9ee73ebe3e-kube-api-access-28vm2" (OuterVolumeSpecName: "kube-api-access-28vm2") pod "43f9209b-c554-49e3-886a-4e9ee73ebe3e" (UID: "43f9209b-c554-49e3-886a-4e9ee73ebe3e"). InnerVolumeSpecName "kube-api-access-28vm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.014390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43f9209b-c554-49e3-886a-4e9ee73ebe3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43f9209b-c554-49e3-886a-4e9ee73ebe3e" (UID: "43f9209b-c554-49e3-886a-4e9ee73ebe3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.014649 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44008c3e-4ca6-4d59-8a7c-046a28c72b7d" (UID: "44008c3e-4ca6-4d59-8a7c-046a28c72b7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.015879 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-kube-api-access-s8hxn" (OuterVolumeSpecName: "kube-api-access-s8hxn") pod "44008c3e-4ca6-4d59-8a7c-046a28c72b7d" (UID: "44008c3e-4ca6-4d59-8a7c-046a28c72b7d"). InnerVolumeSpecName "kube-api-access-s8hxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.111547 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0"
Feb 17 13:53:45 crc kubenswrapper[4768]: E0217 13:53:45.111706 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.111924 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhrjt\" (UniqueName: \"kubernetes.io/projected/fb47d4a7-16fc-402d-8943-40d7d22a00c4-kube-api-access-nhrjt\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:45 crc kubenswrapper[4768]: E0217 13:53:45.111936 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.111939 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28vm2\" (UniqueName: \"kubernetes.io/projected/43f9209b-c554-49e3-886a-4e9ee73ebe3e-kube-api-access-28vm2\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.111949 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f9209b-c554-49e3-886a-4e9ee73ebe3e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.111958 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb47d4a7-16fc-402d-8943-40d7d22a00c4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.111967 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8hxn\" (UniqueName: \"kubernetes.io/projected/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-kube-api-access-s8hxn\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:45 crc kubenswrapper[4768]: E0217 13:53:45.111990 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift podName:81d96922-74f7-4840-bcad-6f98ffb1bbdf nodeName:}" failed. No retries permitted until 2026-02-17 13:53:46.111971921 +0000 UTC m=+1045.391358363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift") pod "swift-storage-0" (UID: "81d96922-74f7-4840-bcad-6f98ffb1bbdf") : configmap "swift-ring-files" not found
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.112018 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44008c3e-4ca6-4d59-8a7c-046a28c72b7d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.497619 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qstf" event={"ID":"ff18ff3e-97c6-433b-8ad9-837a77fb0e88","Type":"ContainerDied","Data":"6deb9698a0dae9fb6c89acb3a1f01c5f420da440b946afcf89e7c42345e79fdf"}
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.497659 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6deb9698a0dae9fb6c89acb3a1f01c5f420da440b946afcf89e7c42345e79fdf"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.497682 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qstf"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.500942 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3797-account-create-update-bftff" event={"ID":"43f9209b-c554-49e3-886a-4e9ee73ebe3e","Type":"ContainerDied","Data":"18b6ace3346902bacc9e18e7f2385c7293ebd0a77009a012494e63ab18f39fe4"}
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.500978 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b6ace3346902bacc9e18e7f2385c7293ebd0a77009a012494e63ab18f39fe4"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.500999 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3797-account-create-update-bftff"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.503342 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5ffzn" event={"ID":"d185e75d-9b91-415c-baba-1f5bea3b5ad1","Type":"ContainerStarted","Data":"0436399dcb87e0d3cc204f1f09694009206de47fe76d3fb3e84716289331d201"}
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.503434 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-5ffzn"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.506256 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8557-account-create-update-zm7x5" event={"ID":"fb47d4a7-16fc-402d-8943-40d7d22a00c4","Type":"ContainerDied","Data":"68f22efb6d3f1a98a35e1a6d6ea3b2712294e9bda0167be8b92de785914cf8a9"}
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.506283 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f22efb6d3f1a98a35e1a6d6ea3b2712294e9bda0167be8b92de785914cf8a9"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.506284 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8557-account-create-update-zm7x5"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.507540 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2wbdp" event={"ID":"44008c3e-4ca6-4d59-8a7c-046a28c72b7d","Type":"ContainerDied","Data":"7362fc9ac08e7584e0ffa2b77aae1983863403224b3bfc0a238141df0d093446"}
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.507562 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2wbdp"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.507575 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7362fc9ac08e7584e0ffa2b77aae1983863403224b3bfc0a238141df0d093446"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.554002 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-5ffzn" podStartSLOduration=2.5539777580000003 podStartE2EDuration="2.553977758s" podCreationTimestamp="2026-02-17 13:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:53:45.530847702 +0000 UTC m=+1044.810234164" watchObservedRunningTime="2026-02-17 13:53:45.553977758 +0000 UTC m=+1044.833364200"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.555326 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399cc340-c212-4298-a995-236a556b5108" path="/var/lib/kubelet/pods/399cc340-c212-4298-a995-236a556b5108/volumes"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928066 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4h65c"]
Feb 17 13:53:45 crc kubenswrapper[4768]: E0217 13:53:45.928438 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44008c3e-4ca6-4d59-8a7c-046a28c72b7d" containerName="mariadb-database-create"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928453 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="44008c3e-4ca6-4d59-8a7c-046a28c72b7d" containerName="mariadb-database-create"
Feb 17 13:53:45 crc kubenswrapper[4768]: E0217 13:53:45.928469 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff18ff3e-97c6-433b-8ad9-837a77fb0e88" containerName="mariadb-database-create"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928475 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff18ff3e-97c6-433b-8ad9-837a77fb0e88" containerName="mariadb-database-create"
Feb 17 13:53:45 crc kubenswrapper[4768]: E0217 13:53:45.928488 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb47d4a7-16fc-402d-8943-40d7d22a00c4" containerName="mariadb-account-create-update"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928494 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb47d4a7-16fc-402d-8943-40d7d22a00c4" containerName="mariadb-account-create-update"
Feb 17 13:53:45 crc kubenswrapper[4768]: E0217 13:53:45.928504 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f9209b-c554-49e3-886a-4e9ee73ebe3e" containerName="mariadb-account-create-update"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928510 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f9209b-c554-49e3-886a-4e9ee73ebe3e" containerName="mariadb-account-create-update"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928653 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb47d4a7-16fc-402d-8943-40d7d22a00c4" containerName="mariadb-account-create-update"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928663 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff18ff3e-97c6-433b-8ad9-837a77fb0e88" containerName="mariadb-database-create"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928672 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f9209b-c554-49e3-886a-4e9ee73ebe3e" containerName="mariadb-account-create-update"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.928684 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="44008c3e-4ca6-4d59-8a7c-046a28c72b7d" containerName="mariadb-database-create"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.929157 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:45 crc kubenswrapper[4768]: I0217 13:53:45.938762 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4h65c"]
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.019958 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5156-account-create-update-zbrm9"]
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.021292 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.028825 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.038196 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5156-account-create-update-zbrm9"]
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.126391 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.126446 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czb5d\" (UniqueName: \"kubernetes.io/projected/f1fec574-62f3-4dfd-a087-8071bb46a099-kube-api-access-czb5d\") pod \"glance-db-create-4h65c\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.126471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1fec574-62f3-4dfd-a087-8071bb46a099-operator-scripts\") pod \"glance-db-create-4h65c\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.126505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmx5d\" (UniqueName: \"kubernetes.io/projected/1796830a-b57b-42b2-8b81-63fbdc349740-kube-api-access-wmx5d\") pod \"glance-5156-account-create-update-zbrm9\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: E0217 13:53:46.126629 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 13:53:46 crc kubenswrapper[4768]: E0217 13:53:46.126663 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.126661 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1796830a-b57b-42b2-8b81-63fbdc349740-operator-scripts\") pod \"glance-5156-account-create-update-zbrm9\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: E0217 13:53:46.126784 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift podName:81d96922-74f7-4840-bcad-6f98ffb1bbdf nodeName:}" failed. No retries permitted until 2026-02-17 13:53:48.126752278 +0000 UTC m=+1047.406138780 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift") pod "swift-storage-0" (UID: "81d96922-74f7-4840-bcad-6f98ffb1bbdf") : configmap "swift-ring-files" not found
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.229744 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czb5d\" (UniqueName: \"kubernetes.io/projected/f1fec574-62f3-4dfd-a087-8071bb46a099-kube-api-access-czb5d\") pod \"glance-db-create-4h65c\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.229816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1fec574-62f3-4dfd-a087-8071bb46a099-operator-scripts\") pod \"glance-db-create-4h65c\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.229864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmx5d\" (UniqueName: \"kubernetes.io/projected/1796830a-b57b-42b2-8b81-63fbdc349740-kube-api-access-wmx5d\") pod \"glance-5156-account-create-update-zbrm9\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.229888 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1796830a-b57b-42b2-8b81-63fbdc349740-operator-scripts\") pod \"glance-5156-account-create-update-zbrm9\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.230956 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1796830a-b57b-42b2-8b81-63fbdc349740-operator-scripts\") pod \"glance-5156-account-create-update-zbrm9\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.231082 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1fec574-62f3-4dfd-a087-8071bb46a099-operator-scripts\") pod \"glance-db-create-4h65c\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.257633 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czb5d\" (UniqueName: \"kubernetes.io/projected/f1fec574-62f3-4dfd-a087-8071bb46a099-kube-api-access-czb5d\") pod \"glance-db-create-4h65c\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.261789 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmx5d\" (UniqueName: \"kubernetes.io/projected/1796830a-b57b-42b2-8b81-63fbdc349740-kube-api-access-wmx5d\") pod \"glance-5156-account-create-update-zbrm9\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.337637 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5156-account-create-update-zbrm9"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.542899 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4h65c"
Feb 17 13:53:46 crc kubenswrapper[4768]: I0217 13:53:46.820646 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5156-account-create-update-zbrm9"]
Feb 17 13:53:46 crc kubenswrapper[4768]: W0217 13:53:46.821682 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1796830a_b57b_42b2_8b81_63fbdc349740.slice/crio-e557db09d51310488ac0b95b53a8bc410815f80224057ebfa512ad023659b888 WatchSource:0}: Error finding container e557db09d51310488ac0b95b53a8bc410815f80224057ebfa512ad023659b888: Status 404 returned error can't find the container with id e557db09d51310488ac0b95b53a8bc410815f80224057ebfa512ad023659b888
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.038326 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4h65c"]
Feb 17 13:53:47 crc kubenswrapper[4768]: W0217 13:53:47.049819 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1fec574_62f3_4dfd_a087_8071bb46a099.slice/crio-d0567d70404d71fab3225e143c6daa50ff9b3fe476471171ac22d5c89ac19a16 WatchSource:0}: Error finding container d0567d70404d71fab3225e143c6daa50ff9b3fe476471171ac22d5c89ac19a16: Status 404 returned error can't find the container with id d0567d70404d71fab3225e143c6daa50ff9b3fe476471171ac22d5c89ac19a16
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.526308 4768 generic.go:334] "Generic (PLEG): container finished" podID="f1fec574-62f3-4dfd-a087-8071bb46a099" containerID="d9406c637b16ae2ac9e13bad82de5b06d8284624cb2ce93679aebc846e4e102e" exitCode=0
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.526356 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4h65c" event={"ID":"f1fec574-62f3-4dfd-a087-8071bb46a099","Type":"ContainerDied","Data":"d9406c637b16ae2ac9e13bad82de5b06d8284624cb2ce93679aebc846e4e102e"}
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.526663 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4h65c" event={"ID":"f1fec574-62f3-4dfd-a087-8071bb46a099","Type":"ContainerStarted","Data":"d0567d70404d71fab3225e143c6daa50ff9b3fe476471171ac22d5c89ac19a16"}
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.528289 4768 generic.go:334] "Generic (PLEG): container finished" podID="1796830a-b57b-42b2-8b81-63fbdc349740" containerID="71e1adea8f990f4377121cf0b59e8d1ffb7b581c80312ebacf71e34764aceed5" exitCode=0
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.528319 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5156-account-create-update-zbrm9" event={"ID":"1796830a-b57b-42b2-8b81-63fbdc349740","Type":"ContainerDied","Data":"71e1adea8f990f4377121cf0b59e8d1ffb7b581c80312ebacf71e34764aceed5"}
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.528337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5156-account-create-update-zbrm9" event={"ID":"1796830a-b57b-42b2-8b81-63fbdc349740","Type":"ContainerStarted","Data":"e557db09d51310488ac0b95b53a8bc410815f80224057ebfa512ad023659b888"}
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.846080 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z7knp"]
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.847397 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.849438 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.854775 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z7knp"]
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.958334 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-operator-scripts\") pod \"root-account-create-update-z7knp\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:47 crc kubenswrapper[4768]: I0217 13:53:47.958636 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjfw\" (UniqueName: \"kubernetes.io/projected/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-kube-api-access-5pjfw\") pod \"root-account-create-update-z7knp\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.060037 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-operator-scripts\") pod \"root-account-create-update-z7knp\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.060093 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pjfw\" (UniqueName: \"kubernetes.io/projected/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-kube-api-access-5pjfw\") pod \"root-account-create-update-z7knp\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.060962 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-operator-scripts\") pod \"root-account-create-update-z7knp\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.079686 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pjfw\" (UniqueName: \"kubernetes.io/projected/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-kube-api-access-5pjfw\") pod \"root-account-create-update-z7knp\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.161272 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0"
Feb 17 13:53:48 crc kubenswrapper[4768]: E0217 13:53:48.161443 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 13:53:48 crc kubenswrapper[4768]: E0217 13:53:48.161456 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 13:53:48 crc kubenswrapper[4768]: E0217 13:53:48.161504 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift podName:81d96922-74f7-4840-bcad-6f98ffb1bbdf nodeName:}" failed. No retries permitted until 2026-02-17 13:53:52.161491206 +0000 UTC m=+1051.440877648 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift") pod "swift-storage-0" (UID: "81d96922-74f7-4840-bcad-6f98ffb1bbdf") : configmap "swift-ring-files" not found
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.212466 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z7knp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.289922 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wcvmp"]
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.297732 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.300981 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.301129 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.305459 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.331741 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wcvmp"]
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.364782 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3fc3a6f3-433a-44de-bf42-c29e730f2da3-etc-swift\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.365350 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-combined-ca-bundle\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.365456 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-dispersionconf\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.365577 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-ring-data-devices\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.366054 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-scripts\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.366223 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wsxg\" (UniqueName: \"kubernetes.io/projected/3fc3a6f3-433a-44de-bf42-c29e730f2da3-kube-api-access-2wsxg\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.366289 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-swiftconf\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.468155 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wsxg\" (UniqueName: \"kubernetes.io/projected/3fc3a6f3-433a-44de-bf42-c29e730f2da3-kube-api-access-2wsxg\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.468597 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-swiftconf\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.468915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3fc3a6f3-433a-44de-bf42-c29e730f2da3-etc-swift\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.469000 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-combined-ca-bundle\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp"
Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.469048 4768
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-dispersionconf\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.469074 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-ring-data-devices\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.469175 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-scripts\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.470177 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-scripts\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.471433 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3fc3a6f3-433a-44de-bf42-c29e730f2da3-etc-swift\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.472310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-ring-data-devices\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.475310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-combined-ca-bundle\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.476268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-dispersionconf\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.478873 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-swiftconf\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.488464 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wsxg\" (UniqueName: \"kubernetes.io/projected/3fc3a6f3-433a-44de-bf42-c29e730f2da3-kube-api-access-2wsxg\") pod \"swift-ring-rebalance-wcvmp\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.646628 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.695069 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z7knp"] Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.885954 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4h65c" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.894715 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5156-account-create-update-zbrm9" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.979000 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmx5d\" (UniqueName: \"kubernetes.io/projected/1796830a-b57b-42b2-8b81-63fbdc349740-kube-api-access-wmx5d\") pod \"1796830a-b57b-42b2-8b81-63fbdc349740\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.979090 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1796830a-b57b-42b2-8b81-63fbdc349740-operator-scripts\") pod \"1796830a-b57b-42b2-8b81-63fbdc349740\" (UID: \"1796830a-b57b-42b2-8b81-63fbdc349740\") " Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.979389 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czb5d\" (UniqueName: \"kubernetes.io/projected/f1fec574-62f3-4dfd-a087-8071bb46a099-kube-api-access-czb5d\") pod \"f1fec574-62f3-4dfd-a087-8071bb46a099\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.979586 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1fec574-62f3-4dfd-a087-8071bb46a099-operator-scripts\") pod 
\"f1fec574-62f3-4dfd-a087-8071bb46a099\" (UID: \"f1fec574-62f3-4dfd-a087-8071bb46a099\") " Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.981789 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1fec574-62f3-4dfd-a087-8071bb46a099-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1fec574-62f3-4dfd-a087-8071bb46a099" (UID: "f1fec574-62f3-4dfd-a087-8071bb46a099"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.982561 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1796830a-b57b-42b2-8b81-63fbdc349740-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1796830a-b57b-42b2-8b81-63fbdc349740" (UID: "1796830a-b57b-42b2-8b81-63fbdc349740"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:48 crc kubenswrapper[4768]: I0217 13:53:48.994584 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1fec574-62f3-4dfd-a087-8071bb46a099-kube-api-access-czb5d" (OuterVolumeSpecName: "kube-api-access-czb5d") pod "f1fec574-62f3-4dfd-a087-8071bb46a099" (UID: "f1fec574-62f3-4dfd-a087-8071bb46a099"). InnerVolumeSpecName "kube-api-access-czb5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.001211 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1796830a-b57b-42b2-8b81-63fbdc349740-kube-api-access-wmx5d" (OuterVolumeSpecName: "kube-api-access-wmx5d") pod "1796830a-b57b-42b2-8b81-63fbdc349740" (UID: "1796830a-b57b-42b2-8b81-63fbdc349740"). InnerVolumeSpecName "kube-api-access-wmx5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.068532 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wcvmp"] Feb 17 13:53:49 crc kubenswrapper[4768]: W0217 13:53:49.073442 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc3a6f3_433a_44de_bf42_c29e730f2da3.slice/crio-275efe9db2b06091f71092185ba2a52f5b89458dadb5f711e5d04948adc191aa WatchSource:0}: Error finding container 275efe9db2b06091f71092185ba2a52f5b89458dadb5f711e5d04948adc191aa: Status 404 returned error can't find the container with id 275efe9db2b06091f71092185ba2a52f5b89458dadb5f711e5d04948adc191aa Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.082878 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1fec574-62f3-4dfd-a087-8071bb46a099-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.082970 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmx5d\" (UniqueName: \"kubernetes.io/projected/1796830a-b57b-42b2-8b81-63fbdc349740-kube-api-access-wmx5d\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.082986 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1796830a-b57b-42b2-8b81-63fbdc349740-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.083008 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czb5d\" (UniqueName: \"kubernetes.io/projected/f1fec574-62f3-4dfd-a087-8071bb46a099-kube-api-access-czb5d\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.546571 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-5156-account-create-update-zbrm9" event={"ID":"1796830a-b57b-42b2-8b81-63fbdc349740","Type":"ContainerDied","Data":"e557db09d51310488ac0b95b53a8bc410815f80224057ebfa512ad023659b888"} Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.548169 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e557db09d51310488ac0b95b53a8bc410815f80224057ebfa512ad023659b888" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.548250 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5156-account-create-update-zbrm9" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.550813 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wcvmp" event={"ID":"3fc3a6f3-433a-44de-bf42-c29e730f2da3","Type":"ContainerStarted","Data":"275efe9db2b06091f71092185ba2a52f5b89458dadb5f711e5d04948adc191aa"} Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.552518 4768 generic.go:334] "Generic (PLEG): container finished" podID="7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396" containerID="b4194a847faac318eeaa30f98b816cfdd6e63015d5385b43fccd8b85282eacf8" exitCode=0 Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.552590 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7knp" event={"ID":"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396","Type":"ContainerDied","Data":"b4194a847faac318eeaa30f98b816cfdd6e63015d5385b43fccd8b85282eacf8"} Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.552627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7knp" event={"ID":"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396","Type":"ContainerStarted","Data":"ac5d728169fa55a92706be6506794f84a9edc0b0cb81bd3c8d8e19f536c16236"} Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.554327 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4h65c" 
event={"ID":"f1fec574-62f3-4dfd-a087-8071bb46a099","Type":"ContainerDied","Data":"d0567d70404d71fab3225e143c6daa50ff9b3fe476471171ac22d5c89ac19a16"} Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.554369 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4h65c" Feb 17 13:53:49 crc kubenswrapper[4768]: I0217 13:53:49.554395 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0567d70404d71fab3225e143c6daa50ff9b3fe476471171ac22d5c89ac19a16" Feb 17 13:53:50 crc kubenswrapper[4768]: I0217 13:53:50.931400 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z7knp" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.016303 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pjfw\" (UniqueName: \"kubernetes.io/projected/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-kube-api-access-5pjfw\") pod \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.016431 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-operator-scripts\") pod \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\" (UID: \"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396\") " Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.017472 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396" (UID: "7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.026720 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-kube-api-access-5pjfw" (OuterVolumeSpecName: "kube-api-access-5pjfw") pod "7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396" (UID: "7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396"). InnerVolumeSpecName "kube-api-access-5pjfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.118057 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pjfw\" (UniqueName: \"kubernetes.io/projected/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-kube-api-access-5pjfw\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.118094 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.269421 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4xwgl"] Feb 17 13:53:51 crc kubenswrapper[4768]: E0217 13:53:51.269826 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396" containerName="mariadb-account-create-update" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.269848 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396" containerName="mariadb-account-create-update" Feb 17 13:53:51 crc kubenswrapper[4768]: E0217 13:53:51.269875 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1796830a-b57b-42b2-8b81-63fbdc349740" containerName="mariadb-account-create-update" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.269882 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1796830a-b57b-42b2-8b81-63fbdc349740" containerName="mariadb-account-create-update" Feb 17 13:53:51 crc kubenswrapper[4768]: E0217 13:53:51.269895 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1fec574-62f3-4dfd-a087-8071bb46a099" containerName="mariadb-database-create" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.269901 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1fec574-62f3-4dfd-a087-8071bb46a099" containerName="mariadb-database-create" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.270053 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1796830a-b57b-42b2-8b81-63fbdc349740" containerName="mariadb-account-create-update" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.270072 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396" containerName="mariadb-account-create-update" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.270080 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1fec574-62f3-4dfd-a087-8071bb46a099" containerName="mariadb-database-create" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.270642 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.272566 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.275010 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5n7xj" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.281521 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4xwgl"] Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.423085 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxj92\" (UniqueName: \"kubernetes.io/projected/63f492b6-e295-4f78-9d73-0643188ffe1c-kube-api-access-hxj92\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.423223 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-db-sync-config-data\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.423256 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-combined-ca-bundle\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.423299 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-config-data\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.525451 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxj92\" (UniqueName: \"kubernetes.io/projected/63f492b6-e295-4f78-9d73-0643188ffe1c-kube-api-access-hxj92\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.525689 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-db-sync-config-data\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.525736 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-combined-ca-bundle\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.525801 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-config-data\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.530169 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-db-sync-config-data\") pod \"glance-db-sync-4xwgl\" (UID: 
\"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.534814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-config-data\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.534893 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-combined-ca-bundle\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.562767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxj92\" (UniqueName: \"kubernetes.io/projected/63f492b6-e295-4f78-9d73-0643188ffe1c-kube-api-access-hxj92\") pod \"glance-db-sync-4xwgl\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.574177 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z7knp" event={"ID":"7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396","Type":"ContainerDied","Data":"ac5d728169fa55a92706be6506794f84a9edc0b0cb81bd3c8d8e19f536c16236"} Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.574214 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z7knp" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.574230 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5d728169fa55a92706be6506794f84a9edc0b0cb81bd3c8d8e19f536c16236" Feb 17 13:53:51 crc kubenswrapper[4768]: I0217 13:53:51.587958 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xwgl" Feb 17 13:53:52 crc kubenswrapper[4768]: I0217 13:53:52.106707 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4xwgl"] Feb 17 13:53:52 crc kubenswrapper[4768]: I0217 13:53:52.241521 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:53:52 crc kubenswrapper[4768]: E0217 13:53:52.241677 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 13:53:52 crc kubenswrapper[4768]: E0217 13:53:52.241703 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 13:53:52 crc kubenswrapper[4768]: E0217 13:53:52.241756 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift podName:81d96922-74f7-4840-bcad-6f98ffb1bbdf nodeName:}" failed. No retries permitted until 2026-02-17 13:54:00.241742317 +0000 UTC m=+1059.521128759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift") pod "swift-storage-0" (UID: "81d96922-74f7-4840-bcad-6f98ffb1bbdf") : configmap "swift-ring-files" not found Feb 17 13:53:53 crc kubenswrapper[4768]: I0217 13:53:53.658272 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:53:53 crc kubenswrapper[4768]: I0217 13:53:53.731210 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftfmm"] Feb 17 13:53:53 crc kubenswrapper[4768]: I0217 13:53:53.731466 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" podUID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" containerName="dnsmasq-dns" containerID="cri-o://8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf" gracePeriod=10 Feb 17 13:53:53 crc kubenswrapper[4768]: W0217 13:53:53.976139 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63f492b6_e295_4f78_9d73_0643188ffe1c.slice/crio-c870c84ffa9a05f10374095b21a7190d34986445247bb48152e9e310ad0927f6 WatchSource:0}: Error finding container c870c84ffa9a05f10374095b21a7190d34986445247bb48152e9e310ad0927f6: Status 404 returned error can't find the container with id c870c84ffa9a05f10374095b21a7190d34986445247bb48152e9e310ad0927f6 Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.238056 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z7knp"] Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.244199 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z7knp"] Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.268098 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.376952 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-config\") pod \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.377118 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ngsx\" (UniqueName: \"kubernetes.io/projected/efe2337b-7579-4cc3-9de6-4076d51d3fdf-kube-api-access-4ngsx\") pod \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.377165 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-dns-svc\") pod \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\" (UID: \"efe2337b-7579-4cc3-9de6-4076d51d3fdf\") " Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.382338 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe2337b-7579-4cc3-9de6-4076d51d3fdf-kube-api-access-4ngsx" (OuterVolumeSpecName: "kube-api-access-4ngsx") pod "efe2337b-7579-4cc3-9de6-4076d51d3fdf" (UID: "efe2337b-7579-4cc3-9de6-4076d51d3fdf"). InnerVolumeSpecName "kube-api-access-4ngsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.414271 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-config" (OuterVolumeSpecName: "config") pod "efe2337b-7579-4cc3-9de6-4076d51d3fdf" (UID: "efe2337b-7579-4cc3-9de6-4076d51d3fdf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.415342 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "efe2337b-7579-4cc3-9de6-4076d51d3fdf" (UID: "efe2337b-7579-4cc3-9de6-4076d51d3fdf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.478920 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.478951 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ngsx\" (UniqueName: \"kubernetes.io/projected/efe2337b-7579-4cc3-9de6-4076d51d3fdf-kube-api-access-4ngsx\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.478960 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efe2337b-7579-4cc3-9de6-4076d51d3fdf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.598652 4768 generic.go:334] "Generic (PLEG): container finished" podID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" containerID="8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf" exitCode=0 Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.598715 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.598734 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" event={"ID":"efe2337b-7579-4cc3-9de6-4076d51d3fdf","Type":"ContainerDied","Data":"8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf"} Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.598768 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftfmm" event={"ID":"efe2337b-7579-4cc3-9de6-4076d51d3fdf","Type":"ContainerDied","Data":"ae4bdc67098bd33126dc0b1ece45fe5f713557d06f572c403cdaa5eaa103d742"} Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.598784 4768 scope.go:117] "RemoveContainer" containerID="8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.601460 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wcvmp" event={"ID":"3fc3a6f3-433a-44de-bf42-c29e730f2da3","Type":"ContainerStarted","Data":"35f5a48812a8485cf50090a4ab2be5b389b6a1d85c281ee047cdc7084668fdb7"} Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.606906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xwgl" event={"ID":"63f492b6-e295-4f78-9d73-0643188ffe1c","Type":"ContainerStarted","Data":"c870c84ffa9a05f10374095b21a7190d34986445247bb48152e9e310ad0927f6"} Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.618167 4768 scope.go:117] "RemoveContainer" containerID="f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.627597 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wcvmp" podStartSLOduration=1.593746416 podStartE2EDuration="6.627579362s" podCreationTimestamp="2026-02-17 13:53:48 +0000 UTC" 
firstStartedPulling="2026-02-17 13:53:49.075767392 +0000 UTC m=+1048.355153834" lastFinishedPulling="2026-02-17 13:53:54.109600338 +0000 UTC m=+1053.388986780" observedRunningTime="2026-02-17 13:53:54.624981632 +0000 UTC m=+1053.904368094" watchObservedRunningTime="2026-02-17 13:53:54.627579362 +0000 UTC m=+1053.906965804" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.644577 4768 scope.go:117] "RemoveContainer" containerID="8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.646047 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftfmm"] Feb 17 13:53:54 crc kubenswrapper[4768]: E0217 13:53:54.652832 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf\": container with ID starting with 8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf not found: ID does not exist" containerID="8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.652867 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf"} err="failed to get container status \"8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf\": rpc error: code = NotFound desc = could not find container \"8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf\": container with ID starting with 8eb11cadfe99c80f2546fe63cb8692ac81b28d49a4b32f5db3107e1d20302acf not found: ID does not exist" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.652887 4768 scope.go:117] "RemoveContainer" containerID="f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c" Feb 17 13:53:54 crc kubenswrapper[4768]: E0217 13:53:54.653399 4768 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c\": container with ID starting with f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c not found: ID does not exist" containerID="f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.653453 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c"} err="failed to get container status \"f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c\": rpc error: code = NotFound desc = could not find container \"f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c\": container with ID starting with f35026217ccdf2ab9d8f040a0d7a43bbf7e594438122c4d491c6f7347bb2586c not found: ID does not exist" Feb 17 13:53:54 crc kubenswrapper[4768]: I0217 13:53:54.672991 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftfmm"] Feb 17 13:53:55 crc kubenswrapper[4768]: I0217 13:53:55.544928 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396" path="/var/lib/kubelet/pods/7e2ca9d7-a491-4da0-a7f1-8d42ff7b9396/volumes" Feb 17 13:53:55 crc kubenswrapper[4768]: I0217 13:53:55.545624 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" path="/var/lib/kubelet/pods/efe2337b-7579-4cc3-9de6-4076d51d3fdf/volumes" Feb 17 13:53:56 crc kubenswrapper[4768]: I0217 13:53:56.238225 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.240689 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fz9q7"] Feb 17 13:53:59 crc 
kubenswrapper[4768]: E0217 13:53:59.241420 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" containerName="dnsmasq-dns" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.241436 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" containerName="dnsmasq-dns" Feb 17 13:53:59 crc kubenswrapper[4768]: E0217 13:53:59.241466 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" containerName="init" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.241475 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" containerName="init" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.241689 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe2337b-7579-4cc3-9de6-4076d51d3fdf" containerName="dnsmasq-dns" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.242415 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fz9q7" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.244343 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.257864 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fz9q7"] Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.362417 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rvj\" (UniqueName: \"kubernetes.io/projected/cabcbc7a-674e-499d-a30f-037a35c12ba7-kube-api-access-k5rvj\") pod \"root-account-create-update-fz9q7\" (UID: \"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " pod="openstack/root-account-create-update-fz9q7" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.362797 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cabcbc7a-674e-499d-a30f-037a35c12ba7-operator-scripts\") pod \"root-account-create-update-fz9q7\" (UID: \"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " pod="openstack/root-account-create-update-fz9q7" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.464242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cabcbc7a-674e-499d-a30f-037a35c12ba7-operator-scripts\") pod \"root-account-create-update-fz9q7\" (UID: \"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " pod="openstack/root-account-create-update-fz9q7" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.464332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rvj\" (UniqueName: \"kubernetes.io/projected/cabcbc7a-674e-499d-a30f-037a35c12ba7-kube-api-access-k5rvj\") pod \"root-account-create-update-fz9q7\" (UID: 
\"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " pod="openstack/root-account-create-update-fz9q7" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.465069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cabcbc7a-674e-499d-a30f-037a35c12ba7-operator-scripts\") pod \"root-account-create-update-fz9q7\" (UID: \"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " pod="openstack/root-account-create-update-fz9q7" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.492987 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rvj\" (UniqueName: \"kubernetes.io/projected/cabcbc7a-674e-499d-a30f-037a35c12ba7-kube-api-access-k5rvj\") pod \"root-account-create-update-fz9q7\" (UID: \"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " pod="openstack/root-account-create-update-fz9q7" Feb 17 13:53:59 crc kubenswrapper[4768]: I0217 13:53:59.560190 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fz9q7" Feb 17 13:54:00 crc kubenswrapper[4768]: I0217 13:54:00.279429 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:54:00 crc kubenswrapper[4768]: E0217 13:54:00.279652 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 13:54:00 crc kubenswrapper[4768]: E0217 13:54:00.279672 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 13:54:00 crc kubenswrapper[4768]: E0217 13:54:00.279722 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift podName:81d96922-74f7-4840-bcad-6f98ffb1bbdf nodeName:}" failed. No retries permitted until 2026-02-17 13:54:16.279707061 +0000 UTC m=+1075.559093503 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift") pod "swift-storage-0" (UID: "81d96922-74f7-4840-bcad-6f98ffb1bbdf") : configmap "swift-ring-files" not found Feb 17 13:54:00 crc kubenswrapper[4768]: I0217 13:54:00.979231 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnb4g" podUID="39dede0b-4ddc-46ea-81c1-a8e7e576aa78" containerName="ovn-controller" probeResult="failure" output=< Feb 17 13:54:00 crc kubenswrapper[4768]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 13:54:00 crc kubenswrapper[4768]: > Feb 17 13:54:04 crc kubenswrapper[4768]: I0217 13:54:04.708857 4768 generic.go:334] "Generic (PLEG): container finished" podID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerID="e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4" exitCode=0 Feb 17 13:54:04 crc kubenswrapper[4768]: I0217 13:54:04.708952 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9615d9e4-113e-4282-a091-a8c69a0c7968","Type":"ContainerDied","Data":"e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4"} Feb 17 13:54:04 crc kubenswrapper[4768]: I0217 13:54:04.711064 4768 generic.go:334] "Generic (PLEG): container finished" podID="7d5df4be-f003-429d-8a84-81a239db88c0" containerID="16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add" exitCode=0 Feb 17 13:54:04 crc kubenswrapper[4768]: I0217 13:54:04.711175 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d5df4be-f003-429d-8a84-81a239db88c0","Type":"ContainerDied","Data":"16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add"} Feb 17 13:54:05 crc kubenswrapper[4768]: I0217 13:54:05.994713 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnb4g" 
podUID="39dede0b-4ddc-46ea-81c1-a8e7e576aa78" containerName="ovn-controller" probeResult="failure" output=< Feb 17 13:54:05 crc kubenswrapper[4768]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 13:54:05 crc kubenswrapper[4768]: > Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.026666 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.028790 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rkhhj" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.281482 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gnb4g-config-hqv7z"] Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.282667 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.284655 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.295126 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnb4g-config-hqv7z"] Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.432041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-log-ovn\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.432189 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgxmq\" (UniqueName: 
\"kubernetes.io/projected/f9349284-dab2-496d-b1af-bc835d9495f6-kube-api-access-vgxmq\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.432312 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.432388 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-scripts\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.432595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run-ovn\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.432626 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-additional-scripts\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535014 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-log-ovn\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgxmq\" (UniqueName: \"kubernetes.io/projected/f9349284-dab2-496d-b1af-bc835d9495f6-kube-api-access-vgxmq\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535191 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535243 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-scripts\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run-ovn\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535362 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-additional-scripts\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535585 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-log-ovn\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535611 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.535927 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run-ovn\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.536741 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-additional-scripts\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.537887 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-scripts\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.569021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgxmq\" (UniqueName: \"kubernetes.io/projected/f9349284-dab2-496d-b1af-bc835d9495f6-kube-api-access-vgxmq\") pod \"ovn-controller-gnb4g-config-hqv7z\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:06 crc kubenswrapper[4768]: I0217 13:54:06.648509 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:09 crc kubenswrapper[4768]: E0217 13:54:09.349986 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 17 13:54:09 crc kubenswrapper[4768]: E0217 13:54:09.350506 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxj92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-4xwgl_openstack(63f492b6-e295-4f78-9d73-0643188ffe1c): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 17 13:54:09 crc kubenswrapper[4768]: E0217 13:54:09.351606 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-4xwgl" podUID="63f492b6-e295-4f78-9d73-0643188ffe1c" Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.573143 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnb4g-config-hqv7z"] Feb 17 13:54:09 crc kubenswrapper[4768]: W0217 13:54:09.584551 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9349284_dab2_496d_b1af_bc835d9495f6.slice/crio-a61bfb9f51537cc576c7846ff219199b9bae2f97e93225cfe2a99378b57dcb57 WatchSource:0}: Error finding container a61bfb9f51537cc576c7846ff219199b9bae2f97e93225cfe2a99378b57dcb57: Status 404 returned error can't find the container with id a61bfb9f51537cc576c7846ff219199b9bae2f97e93225cfe2a99378b57dcb57 Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.641845 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fz9q7"] Feb 17 13:54:09 crc kubenswrapper[4768]: W0217 13:54:09.658218 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcabcbc7a_674e_499d_a30f_037a35c12ba7.slice/crio-f176fe6b733f9ff1589394cd11d693006e9b85d59cb8c849abbc194b0172fb9a WatchSource:0}: Error finding container f176fe6b733f9ff1589394cd11d693006e9b85d59cb8c849abbc194b0172fb9a: Status 404 returned error can't find the container with id f176fe6b733f9ff1589394cd11d693006e9b85d59cb8c849abbc194b0172fb9a Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.763672 4768 generic.go:334] "Generic (PLEG): container finished" podID="3fc3a6f3-433a-44de-bf42-c29e730f2da3" 
containerID="35f5a48812a8485cf50090a4ab2be5b389b6a1d85c281ee047cdc7084668fdb7" exitCode=0 Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.763769 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wcvmp" event={"ID":"3fc3a6f3-433a-44de-bf42-c29e730f2da3","Type":"ContainerDied","Data":"35f5a48812a8485cf50090a4ab2be5b389b6a1d85c281ee047cdc7084668fdb7"} Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.767357 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9615d9e4-113e-4282-a091-a8c69a0c7968","Type":"ContainerStarted","Data":"d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb"} Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.767852 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.769810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d5df4be-f003-429d-8a84-81a239db88c0","Type":"ContainerStarted","Data":"ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151"} Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.770315 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.772205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fz9q7" event={"ID":"cabcbc7a-674e-499d-a30f-037a35c12ba7","Type":"ContainerStarted","Data":"f176fe6b733f9ff1589394cd11d693006e9b85d59cb8c849abbc194b0172fb9a"} Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.773608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnb4g-config-hqv7z" event={"ID":"f9349284-dab2-496d-b1af-bc835d9495f6","Type":"ContainerStarted","Data":"a61bfb9f51537cc576c7846ff219199b9bae2f97e93225cfe2a99378b57dcb57"} Feb 17 
13:54:09 crc kubenswrapper[4768]: E0217 13:54:09.774609 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-4xwgl" podUID="63f492b6-e295-4f78-9d73-0643188ffe1c" Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.805509 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=57.281288208 podStartE2EDuration="1m3.805493833s" podCreationTimestamp="2026-02-17 13:53:06 +0000 UTC" firstStartedPulling="2026-02-17 13:53:22.477919135 +0000 UTC m=+1021.757305587" lastFinishedPulling="2026-02-17 13:53:29.00212477 +0000 UTC m=+1028.281511212" observedRunningTime="2026-02-17 13:54:09.802829691 +0000 UTC m=+1069.082216153" watchObservedRunningTime="2026-02-17 13:54:09.805493833 +0000 UTC m=+1069.084880265" Feb 17 13:54:09 crc kubenswrapper[4768]: I0217 13:54:09.832715 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.279568172 podStartE2EDuration="1m3.832699941s" podCreationTimestamp="2026-02-17 13:53:06 +0000 UTC" firstStartedPulling="2026-02-17 13:53:23.003089128 +0000 UTC m=+1022.282475560" lastFinishedPulling="2026-02-17 13:53:29.556220887 +0000 UTC m=+1028.835607329" observedRunningTime="2026-02-17 13:54:09.830048148 +0000 UTC m=+1069.109434600" watchObservedRunningTime="2026-02-17 13:54:09.832699941 +0000 UTC m=+1069.112086383" Feb 17 13:54:10 crc kubenswrapper[4768]: I0217 13:54:10.782956 4768 generic.go:334] "Generic (PLEG): container finished" podID="cabcbc7a-674e-499d-a30f-037a35c12ba7" containerID="bf6acf4ae817d5dce7f5800bd78bd14b3933a032b692cb1b600c1ca83c8a4ab1" exitCode=0 Feb 17 13:54:10 crc kubenswrapper[4768]: I0217 13:54:10.783186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-fz9q7" event={"ID":"cabcbc7a-674e-499d-a30f-037a35c12ba7","Type":"ContainerDied","Data":"bf6acf4ae817d5dce7f5800bd78bd14b3933a032b692cb1b600c1ca83c8a4ab1"} Feb 17 13:54:10 crc kubenswrapper[4768]: I0217 13:54:10.785550 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9349284-dab2-496d-b1af-bc835d9495f6" containerID="8abc67fc73d38665c484d4c5e1f4ba6f1822919ebbb0332d72519dd155e51f39" exitCode=0 Feb 17 13:54:10 crc kubenswrapper[4768]: I0217 13:54:10.785597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnb4g-config-hqv7z" event={"ID":"f9349284-dab2-496d-b1af-bc835d9495f6","Type":"ContainerDied","Data":"8abc67fc73d38665c484d4c5e1f4ba6f1822919ebbb0332d72519dd155e51f39"} Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.043275 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gnb4g" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.182625 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.233115 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wsxg\" (UniqueName: \"kubernetes.io/projected/3fc3a6f3-433a-44de-bf42-c29e730f2da3-kube-api-access-2wsxg\") pod \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.233372 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-swiftconf\") pod \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.233532 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-scripts\") pod \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.233610 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-combined-ca-bundle\") pod \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.233688 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-dispersionconf\") pod \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.233761 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-ring-data-devices\") pod \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.233878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3fc3a6f3-433a-44de-bf42-c29e730f2da3-etc-swift\") pod \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\" (UID: \"3fc3a6f3-433a-44de-bf42-c29e730f2da3\") " Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.234974 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc3a6f3-433a-44de-bf42-c29e730f2da3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3fc3a6f3-433a-44de-bf42-c29e730f2da3" (UID: "3fc3a6f3-433a-44de-bf42-c29e730f2da3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.236973 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3fc3a6f3-433a-44de-bf42-c29e730f2da3" (UID: "3fc3a6f3-433a-44de-bf42-c29e730f2da3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.241040 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc3a6f3-433a-44de-bf42-c29e730f2da3-kube-api-access-2wsxg" (OuterVolumeSpecName: "kube-api-access-2wsxg") pod "3fc3a6f3-433a-44de-bf42-c29e730f2da3" (UID: "3fc3a6f3-433a-44de-bf42-c29e730f2da3"). InnerVolumeSpecName "kube-api-access-2wsxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.243620 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3fc3a6f3-433a-44de-bf42-c29e730f2da3" (UID: "3fc3a6f3-433a-44de-bf42-c29e730f2da3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.259879 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fc3a6f3-433a-44de-bf42-c29e730f2da3" (UID: "3fc3a6f3-433a-44de-bf42-c29e730f2da3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.266414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3fc3a6f3-433a-44de-bf42-c29e730f2da3" (UID: "3fc3a6f3-433a-44de-bf42-c29e730f2da3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.274020 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-scripts" (OuterVolumeSpecName: "scripts") pod "3fc3a6f3-433a-44de-bf42-c29e730f2da3" (UID: "3fc3a6f3-433a-44de-bf42-c29e730f2da3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.335995 4768 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3fc3a6f3-433a-44de-bf42-c29e730f2da3-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.336040 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wsxg\" (UniqueName: \"kubernetes.io/projected/3fc3a6f3-433a-44de-bf42-c29e730f2da3-kube-api-access-2wsxg\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.336055 4768 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.336067 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.336079 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.336091 4768 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3fc3a6f3-433a-44de-bf42-c29e730f2da3-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.336118 4768 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3fc3a6f3-433a-44de-bf42-c29e730f2da3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.794702 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wcvmp" event={"ID":"3fc3a6f3-433a-44de-bf42-c29e730f2da3","Type":"ContainerDied","Data":"275efe9db2b06091f71092185ba2a52f5b89458dadb5f711e5d04948adc191aa"} Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.794754 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="275efe9db2b06091f71092185ba2a52f5b89458dadb5f711e5d04948adc191aa" Feb 17 13:54:11 crc kubenswrapper[4768]: I0217 13:54:11.795935 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wcvmp" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.179188 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.183815 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fz9q7" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249072 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5rvj\" (UniqueName: \"kubernetes.io/projected/cabcbc7a-674e-499d-a30f-037a35c12ba7-kube-api-access-k5rvj\") pod \"cabcbc7a-674e-499d-a30f-037a35c12ba7\" (UID: \"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249203 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cabcbc7a-674e-499d-a30f-037a35c12ba7-operator-scripts\") pod \"cabcbc7a-674e-499d-a30f-037a35c12ba7\" (UID: \"cabcbc7a-674e-499d-a30f-037a35c12ba7\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249268 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run\") pod \"f9349284-dab2-496d-b1af-bc835d9495f6\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run-ovn\") pod \"f9349284-dab2-496d-b1af-bc835d9495f6\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249391 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-scripts\") pod \"f9349284-dab2-496d-b1af-bc835d9495f6\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249434 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-log-ovn\") pod \"f9349284-dab2-496d-b1af-bc835d9495f6\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249436 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run" (OuterVolumeSpecName: "var-run") pod "f9349284-dab2-496d-b1af-bc835d9495f6" (UID: "f9349284-dab2-496d-b1af-bc835d9495f6"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249459 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgxmq\" (UniqueName: \"kubernetes.io/projected/f9349284-dab2-496d-b1af-bc835d9495f6-kube-api-access-vgxmq\") pod \"f9349284-dab2-496d-b1af-bc835d9495f6\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249472 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f9349284-dab2-496d-b1af-bc835d9495f6" (UID: "f9349284-dab2-496d-b1af-bc835d9495f6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249537 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-additional-scripts\") pod \"f9349284-dab2-496d-b1af-bc835d9495f6\" (UID: \"f9349284-dab2-496d-b1af-bc835d9495f6\") " Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.249538 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f9349284-dab2-496d-b1af-bc835d9495f6" (UID: "f9349284-dab2-496d-b1af-bc835d9495f6"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.250225 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f9349284-dab2-496d-b1af-bc835d9495f6" (UID: "f9349284-dab2-496d-b1af-bc835d9495f6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.250241 4768 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.250261 4768 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.250271 4768 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9349284-dab2-496d-b1af-bc835d9495f6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.250368 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cabcbc7a-674e-499d-a30f-037a35c12ba7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cabcbc7a-674e-499d-a30f-037a35c12ba7" (UID: "cabcbc7a-674e-499d-a30f-037a35c12ba7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.250525 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-scripts" (OuterVolumeSpecName: "scripts") pod "f9349284-dab2-496d-b1af-bc835d9495f6" (UID: "f9349284-dab2-496d-b1af-bc835d9495f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.253847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9349284-dab2-496d-b1af-bc835d9495f6-kube-api-access-vgxmq" (OuterVolumeSpecName: "kube-api-access-vgxmq") pod "f9349284-dab2-496d-b1af-bc835d9495f6" (UID: "f9349284-dab2-496d-b1af-bc835d9495f6"). InnerVolumeSpecName "kube-api-access-vgxmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.256764 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabcbc7a-674e-499d-a30f-037a35c12ba7-kube-api-access-k5rvj" (OuterVolumeSpecName: "kube-api-access-k5rvj") pod "cabcbc7a-674e-499d-a30f-037a35c12ba7" (UID: "cabcbc7a-674e-499d-a30f-037a35c12ba7"). InnerVolumeSpecName "kube-api-access-k5rvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.351972 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.352245 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgxmq\" (UniqueName: \"kubernetes.io/projected/f9349284-dab2-496d-b1af-bc835d9495f6-kube-api-access-vgxmq\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.352309 4768 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f9349284-dab2-496d-b1af-bc835d9495f6-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.352375 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5rvj\" (UniqueName: \"kubernetes.io/projected/cabcbc7a-674e-499d-a30f-037a35c12ba7-kube-api-access-k5rvj\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.352449 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cabcbc7a-674e-499d-a30f-037a35c12ba7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.801862 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnb4g-config-hqv7z" event={"ID":"f9349284-dab2-496d-b1af-bc835d9495f6","Type":"ContainerDied","Data":"a61bfb9f51537cc576c7846ff219199b9bae2f97e93225cfe2a99378b57dcb57"} Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.801905 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a61bfb9f51537cc576c7846ff219199b9bae2f97e93225cfe2a99378b57dcb57" Feb 17 13:54:12 crc kubenswrapper[4768]: 
I0217 13:54:12.801920 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnb4g-config-hqv7z" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.804451 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fz9q7" event={"ID":"cabcbc7a-674e-499d-a30f-037a35c12ba7","Type":"ContainerDied","Data":"f176fe6b733f9ff1589394cd11d693006e9b85d59cb8c849abbc194b0172fb9a"} Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.804472 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f176fe6b733f9ff1589394cd11d693006e9b85d59cb8c849abbc194b0172fb9a" Feb 17 13:54:12 crc kubenswrapper[4768]: I0217 13:54:12.804501 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fz9q7" Feb 17 13:54:13 crc kubenswrapper[4768]: I0217 13:54:13.291277 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gnb4g-config-hqv7z"] Feb 17 13:54:13 crc kubenswrapper[4768]: I0217 13:54:13.298853 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gnb4g-config-hqv7z"] Feb 17 13:54:13 crc kubenswrapper[4768]: I0217 13:54:13.543683 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9349284-dab2-496d-b1af-bc835d9495f6" path="/var/lib/kubelet/pods/f9349284-dab2-496d-b1af-bc835d9495f6/volumes" Feb 17 13:54:16 crc kubenswrapper[4768]: I0217 13:54:16.321233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:54:16 crc kubenswrapper[4768]: I0217 13:54:16.328987 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/81d96922-74f7-4840-bcad-6f98ffb1bbdf-etc-swift\") pod \"swift-storage-0\" (UID: \"81d96922-74f7-4840-bcad-6f98ffb1bbdf\") " pod="openstack/swift-storage-0" Feb 17 13:54:16 crc kubenswrapper[4768]: I0217 13:54:16.366347 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 13:54:16 crc kubenswrapper[4768]: I0217 13:54:16.749215 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 13:54:16 crc kubenswrapper[4768]: I0217 13:54:16.835789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"768e2577d158f803ef3e78e6f59b1edb290298f2fc063f5adf8cf45474c10c86"} Feb 17 13:54:17 crc kubenswrapper[4768]: I0217 13:54:17.844828 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"d106366044ba9b07f8607743bde77aa6e6a4f4723c048969db51b6e9605959d1"} Feb 17 13:54:18 crc kubenswrapper[4768]: I0217 13:54:18.854858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"17b0fa76b94b6ff2578b8de574944611f1e120b8868e7d7f3d3b09f3b939d402"} Feb 17 13:54:18 crc kubenswrapper[4768]: I0217 13:54:18.854910 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"04088c1be1dfad9ce6b70201c0be2abbc037123ddc1685f33a8bb45057dfd10b"} Feb 17 13:54:18 crc kubenswrapper[4768]: I0217 13:54:18.854928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"11d8fe29673525c1243f72a62c6abca274c7f0dc14ad8fe70bfb70d37e7c44ec"} Feb 17 13:54:19 crc kubenswrapper[4768]: I0217 13:54:19.864876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"8f39c847efded8b4264fde39bc319a4e4101710a510f83e7de7d52c1478c46eb"} Feb 17 13:54:19 crc kubenswrapper[4768]: I0217 13:54:19.865145 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"c4f246287f6497d198b43fb2b3916b4167b1a4cedef3854b5845211bfc1859b0"} Feb 17 13:54:19 crc kubenswrapper[4768]: I0217 13:54:19.865158 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"3562a9f5c694a8ea4e30d1a5e1bc60c9fc70d18d8df004e0d2793d1cdec19692"} Feb 17 13:54:20 crc kubenswrapper[4768]: I0217 13:54:20.876633 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"45e689d50e455eb111034dac4e879cd15aa2bb812e2133f3463cdea98df1b8a9"} Feb 17 13:54:21 crc kubenswrapper[4768]: I0217 13:54:21.892730 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"624784e83b6114343ae5dbe35f59f2814418e06825e206a41db92e03649f718f"} Feb 17 13:54:21 crc kubenswrapper[4768]: I0217 13:54:21.893554 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"956c0dd7950d08b254de4e4a7eaf88666e50725af7c7a43fa69efe0c56a5fab8"} Feb 17 13:54:21 crc 
kubenswrapper[4768]: I0217 13:54:21.893634 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"f88e26ce4a6bf33a6bbbcd945c0ebefa4318afdb3b3458ea9e81ddd397325928"} Feb 17 13:54:21 crc kubenswrapper[4768]: I0217 13:54:21.893711 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"7546c511a265b01e5fed219d1ef14f96d60f541725a1ca411ad76c2cd60a51cb"} Feb 17 13:54:22 crc kubenswrapper[4768]: I0217 13:54:22.906338 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"411c4ddfad7e612f718ae3a2a66cde0cfe97f93fb08723956fcf4338ca1e87f7"} Feb 17 13:54:23 crc kubenswrapper[4768]: I0217 13:54:23.919171 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"84dbf7c6826bf79ca1d4844a8a30f32b51e42c44564016def8e6c54926cb23b6"} Feb 17 13:54:23 crc kubenswrapper[4768]: I0217 13:54:23.919490 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d96922-74f7-4840-bcad-6f98ffb1bbdf","Type":"ContainerStarted","Data":"83e55b33f4e98395dddb4c21a961e7731e05c040eb2cf3d9aedaa50e787cb988"} Feb 17 13:54:23 crc kubenswrapper[4768]: I0217 13:54:23.922258 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xwgl" event={"ID":"63f492b6-e295-4f78-9d73-0643188ffe1c","Type":"ContainerStarted","Data":"dd796d4b4a78e0e26f4248f26ee369b9a738e40d3896b5ef5141cc8645aafe76"} Feb 17 13:54:23 crc kubenswrapper[4768]: I0217 13:54:23.959034 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.860990913 
podStartE2EDuration="40.959004542s" podCreationTimestamp="2026-02-17 13:53:43 +0000 UTC" firstStartedPulling="2026-02-17 13:54:16.778972114 +0000 UTC m=+1076.058358566" lastFinishedPulling="2026-02-17 13:54:20.876985753 +0000 UTC m=+1080.156372195" observedRunningTime="2026-02-17 13:54:23.94952054 +0000 UTC m=+1083.228907072" watchObservedRunningTime="2026-02-17 13:54:23.959004542 +0000 UTC m=+1083.238391014" Feb 17 13:54:23 crc kubenswrapper[4768]: I0217 13:54:23.985322 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4xwgl" podStartSLOduration=3.942783944 podStartE2EDuration="32.985301024s" podCreationTimestamp="2026-02-17 13:53:51 +0000 UTC" firstStartedPulling="2026-02-17 13:53:53.979966615 +0000 UTC m=+1053.259353067" lastFinishedPulling="2026-02-17 13:54:23.022483675 +0000 UTC m=+1082.301870147" observedRunningTime="2026-02-17 13:54:23.979643568 +0000 UTC m=+1083.259030040" watchObservedRunningTime="2026-02-17 13:54:23.985301024 +0000 UTC m=+1083.264687456" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.259300 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6pghj"] Feb 17 13:54:24 crc kubenswrapper[4768]: E0217 13:54:24.259705 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9349284-dab2-496d-b1af-bc835d9495f6" containerName="ovn-config" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.259731 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9349284-dab2-496d-b1af-bc835d9495f6" containerName="ovn-config" Feb 17 13:54:24 crc kubenswrapper[4768]: E0217 13:54:24.259756 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc3a6f3-433a-44de-bf42-c29e730f2da3" containerName="swift-ring-rebalance" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.259765 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc3a6f3-433a-44de-bf42-c29e730f2da3" containerName="swift-ring-rebalance" Feb 17 
13:54:24 crc kubenswrapper[4768]: E0217 13:54:24.259790 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabcbc7a-674e-499d-a30f-037a35c12ba7" containerName="mariadb-account-create-update" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.259800 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabcbc7a-674e-499d-a30f-037a35c12ba7" containerName="mariadb-account-create-update" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.259987 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabcbc7a-674e-499d-a30f-037a35c12ba7" containerName="mariadb-account-create-update" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.260006 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc3a6f3-433a-44de-bf42-c29e730f2da3" containerName="swift-ring-rebalance" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.260018 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9349284-dab2-496d-b1af-bc835d9495f6" containerName="ovn-config" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.261074 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.265400 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.274594 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6pghj"] Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.363980 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.364295 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-config\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.364387 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.364458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4v6f\" (UniqueName: \"kubernetes.io/projected/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-kube-api-access-f4v6f\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " 
pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.364504 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.364534 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.465976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.467013 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.467161 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4v6f\" (UniqueName: \"kubernetes.io/projected/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-kube-api-access-f4v6f\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " 
pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.467198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.467556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.468054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.468385 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.468539 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 
crc kubenswrapper[4768]: I0217 13:54:24.469262 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.469389 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-config\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.470087 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-config\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.486170 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4v6f\" (UniqueName: \"kubernetes.io/projected/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-kube-api-access-f4v6f\") pod \"dnsmasq-dns-764c5664d7-6pghj\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:24 crc kubenswrapper[4768]: I0217 13:54:24.579749 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:25 crc kubenswrapper[4768]: I0217 13:54:25.023910 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6pghj"] Feb 17 13:54:25 crc kubenswrapper[4768]: W0217 13:54:25.027214 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0ab0a41_18cb_4ed9_9c99_009e71c184f6.slice/crio-cc94329e1aff3185766a6487d10350a3d6f6172799b2507b11fa914e7edf9210 WatchSource:0}: Error finding container cc94329e1aff3185766a6487d10350a3d6f6172799b2507b11fa914e7edf9210: Status 404 returned error can't find the container with id cc94329e1aff3185766a6487d10350a3d6f6172799b2507b11fa914e7edf9210 Feb 17 13:54:25 crc kubenswrapper[4768]: I0217 13:54:25.938549 4768 generic.go:334] "Generic (PLEG): container finished" podID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerID="2e9636a2cc437556f953304bf94ccb0842ded9aa2e1cd892196e5f4f432ad6ce" exitCode=0 Feb 17 13:54:25 crc kubenswrapper[4768]: I0217 13:54:25.938659 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" event={"ID":"e0ab0a41-18cb-4ed9-9c99-009e71c184f6","Type":"ContainerDied","Data":"2e9636a2cc437556f953304bf94ccb0842ded9aa2e1cd892196e5f4f432ad6ce"} Feb 17 13:54:25 crc kubenswrapper[4768]: I0217 13:54:25.938908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" event={"ID":"e0ab0a41-18cb-4ed9-9c99-009e71c184f6","Type":"ContainerStarted","Data":"cc94329e1aff3185766a6487d10350a3d6f6172799b2507b11fa914e7edf9210"} Feb 17 13:54:26 crc kubenswrapper[4768]: I0217 13:54:26.950174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" event={"ID":"e0ab0a41-18cb-4ed9-9c99-009e71c184f6","Type":"ContainerStarted","Data":"87d58f42c17bfa2aa8f87897bc651d29bae5ab1acae7c5e997259fb061e0bc27"} Feb 17 13:54:26 crc 
kubenswrapper[4768]: I0217 13:54:26.950535 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:26 crc kubenswrapper[4768]: I0217 13:54:26.976239 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" podStartSLOduration=2.976214059 podStartE2EDuration="2.976214059s" podCreationTimestamp="2026-02-17 13:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:54:26.970091271 +0000 UTC m=+1086.249477723" watchObservedRunningTime="2026-02-17 13:54:26.976214059 +0000 UTC m=+1086.255600511" Feb 17 13:54:27 crc kubenswrapper[4768]: I0217 13:54:27.724315 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.003078 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.059935 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.059993 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.120717 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9cb6l"] Feb 17 13:54:28 crc 
kubenswrapper[4768]: I0217 13:54:28.121681 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.132066 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2rj\" (UniqueName: \"kubernetes.io/projected/31314241-7fb9-41ba-811f-64a9a907f49a-kube-api-access-tn2rj\") pod \"cinder-db-create-9cb6l\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.132217 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31314241-7fb9-41ba-811f-64a9a907f49a-operator-scripts\") pod \"cinder-db-create-9cb6l\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.154515 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9cb6l"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.233387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31314241-7fb9-41ba-811f-64a9a907f49a-operator-scripts\") pod \"cinder-db-create-9cb6l\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.233460 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2rj\" (UniqueName: \"kubernetes.io/projected/31314241-7fb9-41ba-811f-64a9a907f49a-kube-api-access-tn2rj\") pod \"cinder-db-create-9cb6l\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.234421 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31314241-7fb9-41ba-811f-64a9a907f49a-operator-scripts\") pod \"cinder-db-create-9cb6l\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.266760 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2rj\" (UniqueName: \"kubernetes.io/projected/31314241-7fb9-41ba-811f-64a9a907f49a-kube-api-access-tn2rj\") pod \"cinder-db-create-9cb6l\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.299930 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d091-account-create-update-mmwqv"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.300893 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.303822 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.324677 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d091-account-create-update-mmwqv"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.335159 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-operator-scripts\") pod \"cinder-d091-account-create-update-mmwqv\" (UID: \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.335442 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgc8t\" (UniqueName: 
\"kubernetes.io/projected/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-kube-api-access-pgc8t\") pod \"cinder-d091-account-create-update-mmwqv\" (UID: \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.435808 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.436945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-operator-scripts\") pod \"cinder-d091-account-create-update-mmwqv\" (UID: \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.436995 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgc8t\" (UniqueName: \"kubernetes.io/projected/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-kube-api-access-pgc8t\") pod \"cinder-d091-account-create-update-mmwqv\" (UID: \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.437791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-operator-scripts\") pod \"cinder-d091-account-create-update-mmwqv\" (UID: \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.465668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgc8t\" (UniqueName: \"kubernetes.io/projected/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-kube-api-access-pgc8t\") pod \"cinder-d091-account-create-update-mmwqv\" (UID: 
\"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.593849 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6bgrn"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.599408 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.604466 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-eeff-account-create-update-d4h7z"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.605453 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.608363 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.616503 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6bgrn"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.621688 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.622591 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-eeff-account-create-update-d4h7z"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.703219 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-76b7s"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.704283 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.710441 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xwvsr" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.710456 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.710518 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.710623 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.712693 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vkdnz"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.714058 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.720488 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-76b7s"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.725651 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vkdnz"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.750861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68fnk\" (UniqueName: \"kubernetes.io/projected/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-kube-api-access-68fnk\") pod \"neutron-eeff-account-create-update-d4h7z\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.750925 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qmqb2\" (UniqueName: \"kubernetes.io/projected/df03a5cd-6bf6-4275-bb4f-0310e49656fd-kube-api-access-qmqb2\") pod \"neutron-db-create-6bgrn\" (UID: \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.750950 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-operator-scripts\") pod \"neutron-eeff-account-create-update-d4h7z\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.751069 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df03a5cd-6bf6-4275-bb4f-0310e49656fd-operator-scripts\") pod \"neutron-db-create-6bgrn\" (UID: \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.792833 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1c1c-account-create-update-9knv8"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.793837 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.799089 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.815400 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1c1c-account-create-update-9knv8"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.852884 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68fnk\" (UniqueName: \"kubernetes.io/projected/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-kube-api-access-68fnk\") pod \"neutron-eeff-account-create-update-d4h7z\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.852953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmqb2\" (UniqueName: \"kubernetes.io/projected/df03a5cd-6bf6-4275-bb4f-0310e49656fd-kube-api-access-qmqb2\") pod \"neutron-db-create-6bgrn\" (UID: \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.852974 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-operator-scripts\") pod \"neutron-eeff-account-create-update-d4h7z\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.852993 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s68nv\" (UniqueName: \"kubernetes.io/projected/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-kube-api-access-s68nv\") pod \"barbican-db-create-vkdnz\" (UID: 
\"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.854396 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-operator-scripts\") pod \"neutron-eeff-account-create-update-d4h7z\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.854495 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-config-data\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.854574 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-combined-ca-bundle\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.854623 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xnsn\" (UniqueName: \"kubernetes.io/projected/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-kube-api-access-2xnsn\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.854745 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df03a5cd-6bf6-4275-bb4f-0310e49656fd-operator-scripts\") pod \"neutron-db-create-6bgrn\" (UID: 
\"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.854781 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-operator-scripts\") pod \"barbican-db-create-vkdnz\" (UID: \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.855690 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df03a5cd-6bf6-4275-bb4f-0310e49656fd-operator-scripts\") pod \"neutron-db-create-6bgrn\" (UID: \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.874011 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmqb2\" (UniqueName: \"kubernetes.io/projected/df03a5cd-6bf6-4275-bb4f-0310e49656fd-kube-api-access-qmqb2\") pod \"neutron-db-create-6bgrn\" (UID: \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.874790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68fnk\" (UniqueName: \"kubernetes.io/projected/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-kube-api-access-68fnk\") pod \"neutron-eeff-account-create-update-d4h7z\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.934202 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.956496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s68nv\" (UniqueName: \"kubernetes.io/projected/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-kube-api-access-s68nv\") pod \"barbican-db-create-vkdnz\" (UID: \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.956540 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-config-data\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.956561 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznxh\" (UniqueName: \"kubernetes.io/projected/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-kube-api-access-hznxh\") pod \"barbican-1c1c-account-create-update-9knv8\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.956628 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-combined-ca-bundle\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.956653 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xnsn\" (UniqueName: \"kubernetes.io/projected/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-kube-api-access-2xnsn\") pod \"keystone-db-sync-76b7s\" (UID: 
\"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.956756 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-operator-scripts\") pod \"barbican-1c1c-account-create-update-9knv8\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.956870 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-operator-scripts\") pod \"barbican-db-create-vkdnz\" (UID: \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.957579 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-operator-scripts\") pod \"barbican-db-create-vkdnz\" (UID: \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.960747 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-config-data\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.960937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-combined-ca-bundle\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " 
pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.967898 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.973262 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xnsn\" (UniqueName: \"kubernetes.io/projected/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-kube-api-access-2xnsn\") pod \"keystone-db-sync-76b7s\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.973565 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9cb6l"] Feb 17 13:54:28 crc kubenswrapper[4768]: I0217 13:54:28.975013 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s68nv\" (UniqueName: \"kubernetes.io/projected/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-kube-api-access-s68nv\") pod \"barbican-db-create-vkdnz\" (UID: \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:29 crc kubenswrapper[4768]: I0217 13:54:29.033445 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:29 crc kubenswrapper[4768]: I0217 13:54:29.043898 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:29 crc kubenswrapper[4768]: I0217 13:54:29.059046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hznxh\" (UniqueName: \"kubernetes.io/projected/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-kube-api-access-hznxh\") pod \"barbican-1c1c-account-create-update-9knv8\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:29 crc kubenswrapper[4768]: I0217 13:54:29.059147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-operator-scripts\") pod \"barbican-1c1c-account-create-update-9knv8\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:29 crc kubenswrapper[4768]: I0217 13:54:29.059926 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-operator-scripts\") pod \"barbican-1c1c-account-create-update-9knv8\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:29 crc kubenswrapper[4768]: I0217 13:54:29.090388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hznxh\" (UniqueName: \"kubernetes.io/projected/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-kube-api-access-hznxh\") pod \"barbican-1c1c-account-create-update-9knv8\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:29 crc kubenswrapper[4768]: I0217 13:54:29.112345 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:31 crc kubenswrapper[4768]: E0217 13:54:31.532830 4768 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:51652->38.102.83.36:45817: write tcp 38.102.83.36:51652->38.102.83.36:45817: write: broken pipe Feb 17 13:54:32 crc kubenswrapper[4768]: I0217 13:54:32.658564 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d091-account-create-update-mmwqv"] Feb 17 13:54:32 crc kubenswrapper[4768]: I0217 13:54:32.782523 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-eeff-account-create-update-d4h7z"] Feb 17 13:54:32 crc kubenswrapper[4768]: I0217 13:54:32.789713 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1c1c-account-create-update-9knv8"] Feb 17 13:54:32 crc kubenswrapper[4768]: W0217 13:54:32.794940 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f2dd794_fc73_4f82_a57d_e9d9314e8b7c.slice/crio-fffac3e7e0498f289fdfd0105f9303f7d6bea06f00baef21d65fc31bcf83c532 WatchSource:0}: Error finding container fffac3e7e0498f289fdfd0105f9303f7d6bea06f00baef21d65fc31bcf83c532: Status 404 returned error can't find the container with id fffac3e7e0498f289fdfd0105f9303f7d6bea06f00baef21d65fc31bcf83c532 Feb 17 13:54:32 crc kubenswrapper[4768]: W0217 13:54:32.906960 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a664abb_3a1f_405e_830b_f3f2ad8c4d22.slice/crio-0288557dfc6b0f199ecd708cc5c588911b1626068cc380c30a19befcecc3fab7 WatchSource:0}: Error finding container 0288557dfc6b0f199ecd708cc5c588911b1626068cc380c30a19befcecc3fab7: Status 404 returned error can't find the container with id 0288557dfc6b0f199ecd708cc5c588911b1626068cc380c30a19befcecc3fab7 Feb 17 13:54:32 crc kubenswrapper[4768]: I0217 
13:54:32.908424 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-76b7s"] Feb 17 13:54:32 crc kubenswrapper[4768]: W0217 13:54:32.916243 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a6f0a6c_7ca0_4f3b_ab3b_d5e548d4874a.slice/crio-c8bb6caa67865b9d899faef11edb9191dbbf458c007a90448051f226d940f9d3 WatchSource:0}: Error finding container c8bb6caa67865b9d899faef11edb9191dbbf458c007a90448051f226d940f9d3: Status 404 returned error can't find the container with id c8bb6caa67865b9d899faef11edb9191dbbf458c007a90448051f226d940f9d3 Feb 17 13:54:32 crc kubenswrapper[4768]: I0217 13:54:32.921373 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vkdnz"] Feb 17 13:54:32 crc kubenswrapper[4768]: W0217 13:54:32.923991 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf03a5cd_6bf6_4275_bb4f_0310e49656fd.slice/crio-f76211f7ede3cb77ac5677f0c54697393399bd560567dd11464db92c5640a333 WatchSource:0}: Error finding container f76211f7ede3cb77ac5677f0c54697393399bd560567dd11464db92c5640a333: Status 404 returned error can't find the container with id f76211f7ede3cb77ac5677f0c54697393399bd560567dd11464db92c5640a333 Feb 17 13:54:32 crc kubenswrapper[4768]: I0217 13:54:32.930757 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6bgrn"] Feb 17 13:54:33 crc kubenswrapper[4768]: I0217 13:54:33.010346 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vkdnz" event={"ID":"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a","Type":"ContainerStarted","Data":"c8bb6caa67865b9d899faef11edb9191dbbf458c007a90448051f226d940f9d3"} Feb 17 13:54:33 crc kubenswrapper[4768]: I0217 13:54:33.011061 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eeff-account-create-update-d4h7z" 
event={"ID":"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c","Type":"ContainerStarted","Data":"fffac3e7e0498f289fdfd0105f9303f7d6bea06f00baef21d65fc31bcf83c532"} Feb 17 13:54:33 crc kubenswrapper[4768]: I0217 13:54:33.012352 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76b7s" event={"ID":"9a664abb-3a1f-405e-830b-f3f2ad8c4d22","Type":"ContainerStarted","Data":"0288557dfc6b0f199ecd708cc5c588911b1626068cc380c30a19befcecc3fab7"} Feb 17 13:54:33 crc kubenswrapper[4768]: I0217 13:54:33.014990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9cb6l" event={"ID":"31314241-7fb9-41ba-811f-64a9a907f49a","Type":"ContainerStarted","Data":"b7cbda1c78a11c1c1d437344068ceeaf9597976d28efbe80f286c3cff5bf8d1b"} Feb 17 13:54:33 crc kubenswrapper[4768]: I0217 13:54:33.016186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6bgrn" event={"ID":"df03a5cd-6bf6-4275-bb4f-0310e49656fd","Type":"ContainerStarted","Data":"f76211f7ede3cb77ac5677f0c54697393399bd560567dd11464db92c5640a333"} Feb 17 13:54:33 crc kubenswrapper[4768]: I0217 13:54:33.017310 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d091-account-create-update-mmwqv" event={"ID":"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1","Type":"ContainerStarted","Data":"0a6a0241d3b2a72b6820316dedd31e4b30b947204c7d8973716a9ab6d452b2ce"} Feb 17 13:54:33 crc kubenswrapper[4768]: I0217 13:54:33.018199 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1c1c-account-create-update-9knv8" event={"ID":"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde","Type":"ContainerStarted","Data":"06d76b84c4a2a90b1ac8bb2ef6ad4c09f0118104561f1832cf9dd2e622a3c456"} Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.027988 4768 generic.go:334] "Generic (PLEG): container finished" podID="02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1" containerID="53973795e45a389ef48509cb52732f8a2459dfd39953db7f6644005b0b1daa69" exitCode=0 Feb 17 
13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.028226 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d091-account-create-update-mmwqv" event={"ID":"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1","Type":"ContainerDied","Data":"53973795e45a389ef48509cb52732f8a2459dfd39953db7f6644005b0b1daa69"} Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.030832 4768 generic.go:334] "Generic (PLEG): container finished" podID="9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde" containerID="287a30ba8a24d538044a95f5da0b65dc6dadf2c4c58e322bcdf289f4acb987f2" exitCode=0 Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.030997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1c1c-account-create-update-9knv8" event={"ID":"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde","Type":"ContainerDied","Data":"287a30ba8a24d538044a95f5da0b65dc6dadf2c4c58e322bcdf289f4acb987f2"} Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.037866 4768 generic.go:334] "Generic (PLEG): container finished" podID="2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a" containerID="68948f5019072470fe8cd0a2b36a2fd7dc1ce4a5f4323051921c897110b76a7e" exitCode=0 Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.038037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vkdnz" event={"ID":"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a","Type":"ContainerDied","Data":"68948f5019072470fe8cd0a2b36a2fd7dc1ce4a5f4323051921c897110b76a7e"} Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.045121 4768 generic.go:334] "Generic (PLEG): container finished" podID="7f2dd794-fc73-4f82-a57d-e9d9314e8b7c" containerID="d17a952c29f40f15fe6174f2dc06dfb5a24b20f0edbcd4e2c6e6fcce7c2ef88d" exitCode=0 Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.045350 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eeff-account-create-update-d4h7z" 
event={"ID":"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c","Type":"ContainerDied","Data":"d17a952c29f40f15fe6174f2dc06dfb5a24b20f0edbcd4e2c6e6fcce7c2ef88d"} Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.049503 4768 generic.go:334] "Generic (PLEG): container finished" podID="31314241-7fb9-41ba-811f-64a9a907f49a" containerID="6cad7da28298a9a03fbe52f9d8f1b2a16ea7ad53f48e2e65ed46870f19e25384" exitCode=0 Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.049582 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9cb6l" event={"ID":"31314241-7fb9-41ba-811f-64a9a907f49a","Type":"ContainerDied","Data":"6cad7da28298a9a03fbe52f9d8f1b2a16ea7ad53f48e2e65ed46870f19e25384"} Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.051584 4768 generic.go:334] "Generic (PLEG): container finished" podID="df03a5cd-6bf6-4275-bb4f-0310e49656fd" containerID="2b899bd14c79681239889a47f05b90923fe2933934a6fd482410670324cca7c8" exitCode=0 Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.051750 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6bgrn" event={"ID":"df03a5cd-6bf6-4275-bb4f-0310e49656fd","Type":"ContainerDied","Data":"2b899bd14c79681239889a47f05b90923fe2933934a6fd482410670324cca7c8"} Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.581605 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.649903 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5ffzn"] Feb 17 13:54:34 crc kubenswrapper[4768]: I0217 13:54:34.650233 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-5ffzn" podUID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerName="dnsmasq-dns" containerID="cri-o://0436399dcb87e0d3cc204f1f09694009206de47fe76d3fb3e84716289331d201" gracePeriod=10 Feb 17 13:54:35 crc 
kubenswrapper[4768]: I0217 13:54:35.063062 4768 generic.go:334] "Generic (PLEG): container finished" podID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerID="0436399dcb87e0d3cc204f1f09694009206de47fe76d3fb3e84716289331d201" exitCode=0 Feb 17 13:54:35 crc kubenswrapper[4768]: I0217 13:54:35.063143 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5ffzn" event={"ID":"d185e75d-9b91-415c-baba-1f5bea3b5ad1","Type":"ContainerDied","Data":"0436399dcb87e0d3cc204f1f09694009206de47fe76d3fb3e84716289331d201"} Feb 17 13:54:35 crc kubenswrapper[4768]: I0217 13:54:35.065599 4768 generic.go:334] "Generic (PLEG): container finished" podID="63f492b6-e295-4f78-9d73-0643188ffe1c" containerID="dd796d4b4a78e0e26f4248f26ee369b9a738e40d3896b5ef5141cc8645aafe76" exitCode=0 Feb 17 13:54:35 crc kubenswrapper[4768]: I0217 13:54:35.065742 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xwgl" event={"ID":"63f492b6-e295-4f78-9d73-0643188ffe1c","Type":"ContainerDied","Data":"dd796d4b4a78e0e26f4248f26ee369b9a738e40d3896b5ef5141cc8645aafe76"} Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.366146 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.445358 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hznxh\" (UniqueName: \"kubernetes.io/projected/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-kube-api-access-hznxh\") pod \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.445487 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-operator-scripts\") pod \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\" (UID: \"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.446529 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde" (UID: "9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.449267 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-kube-api-access-hznxh" (OuterVolumeSpecName: "kube-api-access-hznxh") pod "9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde" (UID: "9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde"). InnerVolumeSpecName "kube-api-access-hznxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.460853 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.500318 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.514115 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.528793 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.536706 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xwgl" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.548231 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-operator-scripts\") pod \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.548458 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxj92\" (UniqueName: \"kubernetes.io/projected/63f492b6-e295-4f78-9d73-0643188ffe1c-kube-api-access-hxj92\") pod \"63f492b6-e295-4f78-9d73-0643188ffe1c\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.548585 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31314241-7fb9-41ba-811f-64a9a907f49a-operator-scripts\") pod \"31314241-7fb9-41ba-811f-64a9a907f49a\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549241 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qmqb2\" (UniqueName: \"kubernetes.io/projected/df03a5cd-6bf6-4275-bb4f-0310e49656fd-kube-api-access-qmqb2\") pod \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\" (UID: \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549281 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f2dd794-fc73-4f82-a57d-e9d9314e8b7c" (UID: "7f2dd794-fc73-4f82-a57d-e9d9314e8b7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549314 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s68nv\" (UniqueName: \"kubernetes.io/projected/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-kube-api-access-s68nv\") pod \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\" (UID: \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549372 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-operator-scripts\") pod \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\" (UID: \"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549417 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68fnk\" (UniqueName: \"kubernetes.io/projected/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-kube-api-access-68fnk\") pod \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\" (UID: \"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549487 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn2rj\" 
(UniqueName: \"kubernetes.io/projected/31314241-7fb9-41ba-811f-64a9a907f49a-kube-api-access-tn2rj\") pod \"31314241-7fb9-41ba-811f-64a9a907f49a\" (UID: \"31314241-7fb9-41ba-811f-64a9a907f49a\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549529 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-combined-ca-bundle\") pod \"63f492b6-e295-4f78-9d73-0643188ffe1c\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549581 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-db-sync-config-data\") pod \"63f492b6-e295-4f78-9d73-0643188ffe1c\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549619 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df03a5cd-6bf6-4275-bb4f-0310e49656fd-operator-scripts\") pod \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\" (UID: \"df03a5cd-6bf6-4275-bb4f-0310e49656fd\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-config-data\") pod \"63f492b6-e295-4f78-9d73-0643188ffe1c\" (UID: \"63f492b6-e295-4f78-9d73-0643188ffe1c\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549857 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31314241-7fb9-41ba-811f-64a9a907f49a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31314241-7fb9-41ba-811f-64a9a907f49a" (UID: "31314241-7fb9-41ba-811f-64a9a907f49a"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.549944 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a" (UID: "2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.550659 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.551491 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df03a5cd-6bf6-4275-bb4f-0310e49656fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df03a5cd-6bf6-4275-bb4f-0310e49656fd" (UID: "df03a5cd-6bf6-4275-bb4f-0310e49656fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.551713 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.553835 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.553863 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31314241-7fb9-41ba-811f-64a9a907f49a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.553892 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.553918 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hznxh\" (UniqueName: \"kubernetes.io/projected/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde-kube-api-access-hznxh\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.554800 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f492b6-e295-4f78-9d73-0643188ffe1c-kube-api-access-hxj92" (OuterVolumeSpecName: "kube-api-access-hxj92") pod "63f492b6-e295-4f78-9d73-0643188ffe1c" (UID: "63f492b6-e295-4f78-9d73-0643188ffe1c"). InnerVolumeSpecName "kube-api-access-hxj92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.555054 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-kube-api-access-68fnk" (OuterVolumeSpecName: "kube-api-access-68fnk") pod "7f2dd794-fc73-4f82-a57d-e9d9314e8b7c" (UID: "7f2dd794-fc73-4f82-a57d-e9d9314e8b7c"). InnerVolumeSpecName "kube-api-access-68fnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.566304 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "63f492b6-e295-4f78-9d73-0643188ffe1c" (UID: "63f492b6-e295-4f78-9d73-0643188ffe1c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.566483 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-kube-api-access-s68nv" (OuterVolumeSpecName: "kube-api-access-s68nv") pod "2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a" (UID: "2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a"). InnerVolumeSpecName "kube-api-access-s68nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.566572 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df03a5cd-6bf6-4275-bb4f-0310e49656fd-kube-api-access-qmqb2" (OuterVolumeSpecName: "kube-api-access-qmqb2") pod "df03a5cd-6bf6-4275-bb4f-0310e49656fd" (UID: "df03a5cd-6bf6-4275-bb4f-0310e49656fd"). InnerVolumeSpecName "kube-api-access-qmqb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.566982 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31314241-7fb9-41ba-811f-64a9a907f49a-kube-api-access-tn2rj" (OuterVolumeSpecName: "kube-api-access-tn2rj") pod "31314241-7fb9-41ba-811f-64a9a907f49a" (UID: "31314241-7fb9-41ba-811f-64a9a907f49a"). InnerVolumeSpecName "kube-api-access-tn2rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.578937 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.589629 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63f492b6-e295-4f78-9d73-0643188ffe1c" (UID: "63f492b6-e295-4f78-9d73-0643188ffe1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.626269 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-config-data" (OuterVolumeSpecName: "config-data") pod "63f492b6-e295-4f78-9d73-0643188ffe1c" (UID: "63f492b6-e295-4f78-9d73-0643188ffe1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.654690 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-dns-svc\") pod \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.654950 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59ljb\" (UniqueName: \"kubernetes.io/projected/d185e75d-9b91-415c-baba-1f5bea3b5ad1-kube-api-access-59ljb\") pod \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.655049 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-nb\") pod \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.655154 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgc8t\" (UniqueName: \"kubernetes.io/projected/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-kube-api-access-pgc8t\") pod \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\" (UID: \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.655255 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-config\") pod \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.655415 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-operator-scripts\") pod \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\" (UID: \"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.655522 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-sb\") pod \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\" (UID: \"d185e75d-9b91-415c-baba-1f5bea3b5ad1\") " Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.655872 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1" (UID: "02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.655954 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmqb2\" (UniqueName: \"kubernetes.io/projected/df03a5cd-6bf6-4275-bb4f-0310e49656fd-kube-api-access-qmqb2\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656028 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s68nv\" (UniqueName: \"kubernetes.io/projected/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a-kube-api-access-s68nv\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656084 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68fnk\" (UniqueName: \"kubernetes.io/projected/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c-kube-api-access-68fnk\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656165 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn2rj\" (UniqueName: 
\"kubernetes.io/projected/31314241-7fb9-41ba-811f-64a9a907f49a-kube-api-access-tn2rj\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656221 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656292 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656349 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df03a5cd-6bf6-4275-bb4f-0310e49656fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656401 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f492b6-e295-4f78-9d73-0643188ffe1c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.656452 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxj92\" (UniqueName: \"kubernetes.io/projected/63f492b6-e295-4f78-9d73-0643188ffe1c-kube-api-access-hxj92\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.658181 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-kube-api-access-pgc8t" (OuterVolumeSpecName: "kube-api-access-pgc8t") pod "02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1" (UID: "02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1"). InnerVolumeSpecName "kube-api-access-pgc8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.658229 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d185e75d-9b91-415c-baba-1f5bea3b5ad1-kube-api-access-59ljb" (OuterVolumeSpecName: "kube-api-access-59ljb") pod "d185e75d-9b91-415c-baba-1f5bea3b5ad1" (UID: "d185e75d-9b91-415c-baba-1f5bea3b5ad1"). InnerVolumeSpecName "kube-api-access-59ljb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.689287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d185e75d-9b91-415c-baba-1f5bea3b5ad1" (UID: "d185e75d-9b91-415c-baba-1f5bea3b5ad1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.690432 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d185e75d-9b91-415c-baba-1f5bea3b5ad1" (UID: "d185e75d-9b91-415c-baba-1f5bea3b5ad1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.691734 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d185e75d-9b91-415c-baba-1f5bea3b5ad1" (UID: "d185e75d-9b91-415c-baba-1f5bea3b5ad1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.692742 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-config" (OuterVolumeSpecName: "config") pod "d185e75d-9b91-415c-baba-1f5bea3b5ad1" (UID: "d185e75d-9b91-415c-baba-1f5bea3b5ad1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.757708 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.757758 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.757820 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.757838 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59ljb\" (UniqueName: \"kubernetes.io/projected/d185e75d-9b91-415c-baba-1f5bea3b5ad1-kube-api-access-59ljb\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.757852 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.757864 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgc8t\" (UniqueName: 
\"kubernetes.io/projected/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1-kube-api-access-pgc8t\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:37 crc kubenswrapper[4768]: I0217 13:54:37.757877 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d185e75d-9b91-415c-baba-1f5bea3b5ad1-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.088336 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6bgrn" event={"ID":"df03a5cd-6bf6-4275-bb4f-0310e49656fd","Type":"ContainerDied","Data":"f76211f7ede3cb77ac5677f0c54697393399bd560567dd11464db92c5640a333"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.088387 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f76211f7ede3cb77ac5677f0c54697393399bd560567dd11464db92c5640a333" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.088403 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6bgrn" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.090799 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d091-account-create-update-mmwqv" event={"ID":"02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1","Type":"ContainerDied","Data":"0a6a0241d3b2a72b6820316dedd31e4b30b947204c7d8973716a9ab6d452b2ce"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.090845 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a6a0241d3b2a72b6820316dedd31e4b30b947204c7d8973716a9ab6d452b2ce" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.090856 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d091-account-create-update-mmwqv" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.093892 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5ffzn" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.093875 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5ffzn" event={"ID":"d185e75d-9b91-415c-baba-1f5bea3b5ad1","Type":"ContainerDied","Data":"3487555ab75b87af5b96377599846551b596f1a5e9440fe3ce7914d22b007ac8"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.094284 4768 scope.go:117] "RemoveContainer" containerID="0436399dcb87e0d3cc204f1f09694009206de47fe76d3fb3e84716289331d201" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.095990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vkdnz" event={"ID":"2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a","Type":"ContainerDied","Data":"c8bb6caa67865b9d899faef11edb9191dbbf458c007a90448051f226d940f9d3"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.096031 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vkdnz" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.096036 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8bb6caa67865b9d899faef11edb9191dbbf458c007a90448051f226d940f9d3" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.097871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eeff-account-create-update-d4h7z" event={"ID":"7f2dd794-fc73-4f82-a57d-e9d9314e8b7c","Type":"ContainerDied","Data":"fffac3e7e0498f289fdfd0105f9303f7d6bea06f00baef21d65fc31bcf83c532"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.097897 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fffac3e7e0498f289fdfd0105f9303f7d6bea06f00baef21d65fc31bcf83c532" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.098158 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-eeff-account-create-update-d4h7z" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.099447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76b7s" event={"ID":"9a664abb-3a1f-405e-830b-f3f2ad8c4d22","Type":"ContainerStarted","Data":"567bcfcd5b3a86cbacbf8fa49080f7399efeec0b42dd805f28787c6d2216a1a4"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.102623 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cb6l" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.102657 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9cb6l" event={"ID":"31314241-7fb9-41ba-811f-64a9a907f49a","Type":"ContainerDied","Data":"b7cbda1c78a11c1c1d437344068ceeaf9597976d28efbe80f286c3cff5bf8d1b"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.102691 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7cbda1c78a11c1c1d437344068ceeaf9597976d28efbe80f286c3cff5bf8d1b" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.106990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1c1c-account-create-update-9knv8" event={"ID":"9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde","Type":"ContainerDied","Data":"06d76b84c4a2a90b1ac8bb2ef6ad4c09f0118104561f1832cf9dd2e622a3c456"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.107019 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d76b84c4a2a90b1ac8bb2ef6ad4c09f0118104561f1832cf9dd2e622a3c456" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.107079 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1c1c-account-create-update-9knv8" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.121414 4768 scope.go:117] "RemoveContainer" containerID="386761d8e1e005a4fcb8d495b6850d531a7d26770bd0d0b80c0edc1eaed5e545" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.121648 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xwgl" event={"ID":"63f492b6-e295-4f78-9d73-0643188ffe1c","Type":"ContainerDied","Data":"c870c84ffa9a05f10374095b21a7190d34986445247bb48152e9e310ad0927f6"} Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.121685 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c870c84ffa9a05f10374095b21a7190d34986445247bb48152e9e310ad0927f6" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.121768 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xwgl" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.150345 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-76b7s" podStartSLOduration=5.783314208 podStartE2EDuration="10.150328679s" podCreationTimestamp="2026-02-17 13:54:28 +0000 UTC" firstStartedPulling="2026-02-17 13:54:32.908712312 +0000 UTC m=+1092.188098754" lastFinishedPulling="2026-02-17 13:54:37.275726783 +0000 UTC m=+1096.555113225" observedRunningTime="2026-02-17 13:54:38.128712005 +0000 UTC m=+1097.408098447" watchObservedRunningTime="2026-02-17 13:54:38.150328679 +0000 UTC m=+1097.429715111" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.151256 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5ffzn"] Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.156956 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5ffzn"] Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985467 4768 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p8nmv"] Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985797 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985809 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985820 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df03a5cd-6bf6-4275-bb4f-0310e49656fd" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985825 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="df03a5cd-6bf6-4275-bb4f-0310e49656fd" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985840 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31314241-7fb9-41ba-811f-64a9a907f49a" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985846 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="31314241-7fb9-41ba-811f-64a9a907f49a" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985854 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985860 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985875 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerName="dnsmasq-dns" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 
13:54:38.985880 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerName="dnsmasq-dns" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985895 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f492b6-e295-4f78-9d73-0643188ffe1c" containerName="glance-db-sync" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985900 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f492b6-e295-4f78-9d73-0643188ffe1c" containerName="glance-db-sync" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985908 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985913 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985923 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerName="init" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985929 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerName="init" Feb 17 13:54:38 crc kubenswrapper[4768]: E0217 13:54:38.985945 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2dd794-fc73-4f82-a57d-e9d9314e8b7c" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.985951 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2dd794-fc73-4f82-a57d-e9d9314e8b7c" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.986114 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2dd794-fc73-4f82-a57d-e9d9314e8b7c" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc 
kubenswrapper[4768]: I0217 13:54:38.986124 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" containerName="dnsmasq-dns" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.986133 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.986146 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="df03a5cd-6bf6-4275-bb4f-0310e49656fd" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.986160 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.986172 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1" containerName="mariadb-account-create-update" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.986184 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f492b6-e295-4f78-9d73-0643188ffe1c" containerName="glance-db-sync" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.986195 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="31314241-7fb9-41ba-811f-64a9a907f49a" containerName="mariadb-database-create" Feb 17 13:54:38 crc kubenswrapper[4768]: I0217 13:54:38.987246 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.010741 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p8nmv"] Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.077248 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grj6g\" (UniqueName: \"kubernetes.io/projected/ce253312-d33a-42e0-8f1e-9163171fd75d-kube-api-access-grj6g\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.077315 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.077339 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-config\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.077355 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.077374 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.077640 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.179008 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.179391 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grj6g\" (UniqueName: \"kubernetes.io/projected/ce253312-d33a-42e0-8f1e-9163171fd75d-kube-api-access-grj6g\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.179423 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.179450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.179472 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-config\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.179498 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.180194 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.180408 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.180726 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.181030 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.181095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-config\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.220727 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grj6g\" (UniqueName: \"kubernetes.io/projected/ce253312-d33a-42e0-8f1e-9163171fd75d-kube-api-access-grj6g\") pod \"dnsmasq-dns-74f6bcbc87-p8nmv\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.306705 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.551301 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d185e75d-9b91-415c-baba-1f5bea3b5ad1" path="/var/lib/kubelet/pods/d185e75d-9b91-415c-baba-1f5bea3b5ad1/volumes" Feb 17 13:54:39 crc kubenswrapper[4768]: I0217 13:54:39.785097 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p8nmv"] Feb 17 13:54:39 crc kubenswrapper[4768]: W0217 13:54:39.789686 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce253312_d33a_42e0_8f1e_9163171fd75d.slice/crio-49d91cbe7d19799883fa394c12eadce3ea37fb37a3bd255e262eed67b549b22e WatchSource:0}: Error finding container 49d91cbe7d19799883fa394c12eadce3ea37fb37a3bd255e262eed67b549b22e: Status 404 returned error can't find the container with id 49d91cbe7d19799883fa394c12eadce3ea37fb37a3bd255e262eed67b549b22e Feb 17 13:54:40 crc kubenswrapper[4768]: I0217 13:54:40.138785 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" event={"ID":"ce253312-d33a-42e0-8f1e-9163171fd75d","Type":"ContainerStarted","Data":"49d91cbe7d19799883fa394c12eadce3ea37fb37a3bd255e262eed67b549b22e"} Feb 17 13:54:41 crc kubenswrapper[4768]: I0217 13:54:41.147095 4768 generic.go:334] "Generic (PLEG): container finished" podID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerID="0aab15c6184155a093616cd2edd6c20089afc8b48768c70dff173c372bd14ddf" exitCode=0 Feb 17 13:54:41 crc kubenswrapper[4768]: I0217 13:54:41.147309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" event={"ID":"ce253312-d33a-42e0-8f1e-9163171fd75d","Type":"ContainerDied","Data":"0aab15c6184155a093616cd2edd6c20089afc8b48768c70dff173c372bd14ddf"} Feb 17 13:54:43 crc kubenswrapper[4768]: I0217 13:54:43.168551 4768 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" event={"ID":"ce253312-d33a-42e0-8f1e-9163171fd75d","Type":"ContainerStarted","Data":"570bdd2ae042a9972b97dcfc7093b86adbdca7b3d56ec0da1b5ca282d1c054f6"} Feb 17 13:54:43 crc kubenswrapper[4768]: I0217 13:54:43.168797 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:43 crc kubenswrapper[4768]: I0217 13:54:43.170774 4768 generic.go:334] "Generic (PLEG): container finished" podID="9a664abb-3a1f-405e-830b-f3f2ad8c4d22" containerID="567bcfcd5b3a86cbacbf8fa49080f7399efeec0b42dd805f28787c6d2216a1a4" exitCode=0 Feb 17 13:54:43 crc kubenswrapper[4768]: I0217 13:54:43.170816 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76b7s" event={"ID":"9a664abb-3a1f-405e-830b-f3f2ad8c4d22","Type":"ContainerDied","Data":"567bcfcd5b3a86cbacbf8fa49080f7399efeec0b42dd805f28787c6d2216a1a4"} Feb 17 13:54:43 crc kubenswrapper[4768]: I0217 13:54:43.205074 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" podStartSLOduration=5.205044179 podStartE2EDuration="5.205044179s" podCreationTimestamp="2026-02-17 13:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:54:43.200867855 +0000 UTC m=+1102.480254307" watchObservedRunningTime="2026-02-17 13:54:43.205044179 +0000 UTC m=+1102.484430661" Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.532775 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.694498 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-combined-ca-bundle\") pod \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.694852 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xnsn\" (UniqueName: \"kubernetes.io/projected/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-kube-api-access-2xnsn\") pod \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.694893 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-config-data\") pod \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\" (UID: \"9a664abb-3a1f-405e-830b-f3f2ad8c4d22\") " Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.699961 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-kube-api-access-2xnsn" (OuterVolumeSpecName: "kube-api-access-2xnsn") pod "9a664abb-3a1f-405e-830b-f3f2ad8c4d22" (UID: "9a664abb-3a1f-405e-830b-f3f2ad8c4d22"). InnerVolumeSpecName "kube-api-access-2xnsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.718221 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a664abb-3a1f-405e-830b-f3f2ad8c4d22" (UID: "9a664abb-3a1f-405e-830b-f3f2ad8c4d22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.743841 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-config-data" (OuterVolumeSpecName: "config-data") pod "9a664abb-3a1f-405e-830b-f3f2ad8c4d22" (UID: "9a664abb-3a1f-405e-830b-f3f2ad8c4d22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.796179 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.796210 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xnsn\" (UniqueName: \"kubernetes.io/projected/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-kube-api-access-2xnsn\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:44 crc kubenswrapper[4768]: I0217 13:54:44.796222 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a664abb-3a1f-405e-830b-f3f2ad8c4d22-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.188710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-76b7s" event={"ID":"9a664abb-3a1f-405e-830b-f3f2ad8c4d22","Type":"ContainerDied","Data":"0288557dfc6b0f199ecd708cc5c588911b1626068cc380c30a19befcecc3fab7"} Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.188750 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0288557dfc6b0f199ecd708cc5c588911b1626068cc380c30a19befcecc3fab7" Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.189139 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-76b7s" Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.375050 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p8nmv"] Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.375266 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" podUID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerName="dnsmasq-dns" containerID="cri-o://570bdd2ae042a9972b97dcfc7093b86adbdca7b3d56ec0da1b5ca282d1c054f6" gracePeriod=10 Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.413689 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-h4ww6"] Feb 17 13:54:45 crc kubenswrapper[4768]: E0217 13:54:45.414074 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a664abb-3a1f-405e-830b-f3f2ad8c4d22" containerName="keystone-db-sync" Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.414093 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a664abb-3a1f-405e-830b-f3f2ad8c4d22" containerName="keystone-db-sync" Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.414291 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a664abb-3a1f-405e-830b-f3f2ad8c4d22" containerName="keystone-db-sync" Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.420357 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-h4ww6" Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.429970 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-h4ww6"] Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.481475 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t6h7l"] Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.483301 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.493573 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.493820 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.493927 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.501393 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xwvsr"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.509313 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.511430 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t6h7l"]
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.512989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-config-data\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513050 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513074 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513119 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-config\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513145 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-combined-ca-bundle\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513207 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-fernet-keys\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513330 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4bd\" (UniqueName: \"kubernetes.io/projected/83c7e8b0-a496-489a-b66d-230261a68227-kube-api-access-hr4bd\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513351 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-scripts\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513376 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-svc\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513418 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnkvl\" (UniqueName: \"kubernetes.io/projected/43ba8a04-014d-4289-ad5e-9f883e9b2d69-kube-api-access-fnkvl\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.513448 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-credential-keys\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614449 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-fernet-keys\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614583 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr4bd\" (UniqueName: \"kubernetes.io/projected/83c7e8b0-a496-489a-b66d-230261a68227-kube-api-access-hr4bd\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614601 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-scripts\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-svc\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614663 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnkvl\" (UniqueName: \"kubernetes.io/projected/43ba8a04-014d-4289-ad5e-9f883e9b2d69-kube-api-access-fnkvl\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-credential-keys\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614752 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-config-data\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614783 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614827 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-config\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.614852 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-combined-ca-bundle\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.617880 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.621066 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-fernet-keys\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.622126 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.625177 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.625637 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-credential-keys\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.625730 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-config\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.626277 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-svc\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.626321 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-combined-ca-bundle\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.637875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-config-data\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.645468 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-scripts\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.659231 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr4bd\" (UniqueName: \"kubernetes.io/projected/83c7e8b0-a496-489a-b66d-230261a68227-kube-api-access-hr4bd\") pod \"keystone-bootstrap-t6h7l\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.678686 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnkvl\" (UniqueName: \"kubernetes.io/projected/43ba8a04-014d-4289-ad5e-9f883e9b2d69-kube-api-access-fnkvl\") pod \"dnsmasq-dns-847c4cc679-h4ww6\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.699930 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55d496646f-4tq7c"]
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.701740 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.716701 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.716985 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.717122 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-dfz84"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.730349 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.765254 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55d496646f-4tq7c"]
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.822896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e8824d-9b92-4724-971e-c807d48d8229-logs\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.822958 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-config-data\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.822986 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-scripts\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.823020 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23e8824d-9b92-4724-971e-c807d48d8229-horizon-secret-key\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.823066 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7n6m\" (UniqueName: \"kubernetes.io/projected/23e8824d-9b92-4724-971e-c807d48d8229-kube-api-access-x7n6m\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.845303 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4vs6w"]
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.846735 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.861482 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.864375 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gwhsg"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.873088 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.875374 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4vs6w"]
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.879024 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-h4ww6"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.893444 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t6h7l"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.904398 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.907773 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.938523 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.938740 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.942010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7n6m\" (UniqueName: \"kubernetes.io/projected/23e8824d-9b92-4724-971e-c807d48d8229-kube-api-access-x7n6m\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.942140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e8824d-9b92-4724-971e-c807d48d8229-logs\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.942189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-config-data\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.942230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-scripts\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.942269 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23e8824d-9b92-4724-971e-c807d48d8229-horizon-secret-key\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.944965 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e8824d-9b92-4724-971e-c807d48d8229-logs\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.946046 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-config-data\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.950136 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-scripts\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:45 crc kubenswrapper[4768]: I0217 13:54:45.957601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23e8824d-9b92-4724-971e-c807d48d8229-horizon-secret-key\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.004684 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.050350 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxc7\" (UniqueName: \"kubernetes.io/projected/1c0e296a-80f3-4efe-bb28-17fdfd153397-kube-api-access-jqxc7\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.050798 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-run-httpd\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.050846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-combined-ca-bundle\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.050868 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.050908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-config-data\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.050958 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-config\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.050973 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-scripts\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.051013 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-log-httpd\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.051036 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bkq2\" (UniqueName: \"kubernetes.io/projected/33f4a984-4d42-469e-8eda-c49264f0e4d9-kube-api-access-4bkq2\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.051088 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.072801 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7n6m\" (UniqueName: \"kubernetes.io/projected/23e8824d-9b92-4724-971e-c807d48d8229-kube-api-access-x7n6m\") pod \"horizon-55d496646f-4tq7c\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") " pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.084214 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5c8md"]
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.085633 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.087522 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.109576 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.110417 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rp4mv"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.113903 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.146237 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5c8md"]
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157222 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqxc7\" (UniqueName: \"kubernetes.io/projected/1c0e296a-80f3-4efe-bb28-17fdfd153397-kube-api-access-jqxc7\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-run-httpd\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-combined-ca-bundle\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157397 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-config-data\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157483 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-config\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157501 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-scripts\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157537 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-log-httpd\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bkq2\" (UniqueName: \"kubernetes.io/projected/33f4a984-4d42-469e-8eda-c49264f0e4d9-kube-api-access-4bkq2\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.157609 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.165714 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-run-httpd\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.166372 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-log-httpd\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.173754 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-scripts\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.187178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-config-data\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.190134 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.195758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.255266 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bkq2\" (UniqueName: \"kubernetes.io/projected/33f4a984-4d42-469e-8eda-c49264f0e4d9-kube-api-access-4bkq2\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.255287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-config\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.256777 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-combined-ca-bundle\") pod \"neutron-db-sync-4vs6w\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.264166 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqxc7\" (UniqueName: \"kubernetes.io/projected/1c0e296a-80f3-4efe-bb28-17fdfd153397-kube-api-access-jqxc7\") pod \"ceilometer-0\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.300507 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-scripts\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.300629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df7e53d7-b63b-41b4-b909-c6effd0dab0c-etc-machine-id\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.300747 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kmg\" (UniqueName: \"kubernetes.io/projected/df7e53d7-b63b-41b4-b909-c6effd0dab0c-kube-api-access-l2kmg\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.300797 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-config-data\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.300861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-combined-ca-bundle\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.300912 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-db-sync-config-data\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.338614 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.388567 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.403989 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kmg\" (UniqueName: \"kubernetes.io/projected/df7e53d7-b63b-41b4-b909-c6effd0dab0c-kube-api-access-l2kmg\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.404048 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-config-data\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.404088 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-combined-ca-bundle\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.404177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-db-sync-config-data\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.404237 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-scripts\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md"
Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.404281
4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df7e53d7-b63b-41b4-b909-c6effd0dab0c-etc-machine-id\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.404400 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df7e53d7-b63b-41b4-b909-c6effd0dab0c-etc-machine-id\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.411919 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-db-sync-config-data\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.436162 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-scripts\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.436493 4768 generic.go:334] "Generic (PLEG): container finished" podID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerID="570bdd2ae042a9972b97dcfc7093b86adbdca7b3d56ec0da1b5ca282d1c054f6" exitCode=0 Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.436601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" event={"ID":"ce253312-d33a-42e0-8f1e-9163171fd75d","Type":"ContainerDied","Data":"570bdd2ae042a9972b97dcfc7093b86adbdca7b3d56ec0da1b5ca282d1c054f6"} Feb 17 13:54:46 crc 
kubenswrapper[4768]: I0217 13:54:46.436654 4768 scope.go:117] "RemoveContainer" containerID="570bdd2ae042a9972b97dcfc7093b86adbdca7b3d56ec0da1b5ca282d1c054f6" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.436722 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-config-data\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.462711 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jmq2h"] Feb 17 13:54:46 crc kubenswrapper[4768]: E0217 13:54:46.463258 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerName="init" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.463284 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerName="init" Feb 17 13:54:46 crc kubenswrapper[4768]: E0217 13:54:46.463309 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerName="dnsmasq-dns" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.463316 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerName="dnsmasq-dns" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.463520 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce253312-d33a-42e0-8f1e-9163171fd75d" containerName="dnsmasq-dns" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.464311 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.467684 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.468054 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9nbg2" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.485710 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jmq2h"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.493236 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4vs6w" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.500774 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xfpzr"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.502279 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.504878 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-combined-ca-bundle\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.505026 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-swift-storage-0\") pod \"ce253312-d33a-42e0-8f1e-9163171fd75d\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.505183 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-svc\") pod \"ce253312-d33a-42e0-8f1e-9163171fd75d\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.505255 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-nb\") pod \"ce253312-d33a-42e0-8f1e-9163171fd75d\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.505282 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grj6g\" (UniqueName: \"kubernetes.io/projected/ce253312-d33a-42e0-8f1e-9163171fd75d-kube-api-access-grj6g\") pod \"ce253312-d33a-42e0-8f1e-9163171fd75d\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.505304 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-sb\") pod \"ce253312-d33a-42e0-8f1e-9163171fd75d\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.505337 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-config\") pod \"ce253312-d33a-42e0-8f1e-9163171fd75d\" (UID: \"ce253312-d33a-42e0-8f1e-9163171fd75d\") " Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.512191 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.512395 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mbkz8" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.512574 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.521110 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kmg\" (UniqueName: \"kubernetes.io/projected/df7e53d7-b63b-41b4-b909-c6effd0dab0c-kube-api-access-l2kmg\") pod \"cinder-db-sync-5c8md\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.538342 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-h4ww6"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.550046 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce253312-d33a-42e0-8f1e-9163171fd75d-kube-api-access-grj6g" (OuterVolumeSpecName: "kube-api-access-grj6g") pod "ce253312-d33a-42e0-8f1e-9163171fd75d" (UID: "ce253312-d33a-42e0-8f1e-9163171fd75d"). 
InnerVolumeSpecName "kube-api-access-grj6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.570138 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xfpzr"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.585537 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7645d5bbd9-t6l64"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.587555 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.611784 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzfsj\" (UniqueName: \"kubernetes.io/projected/e23e418f-2c16-4fa8-94fb-5e575affd61b-kube-api-access-tzfsj\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.611837 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgvw\" (UniqueName: \"kubernetes.io/projected/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-kube-api-access-bqgvw\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.611872 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-scripts\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.611891 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-combined-ca-bundle\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.611910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-config-data\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.611945 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-logs\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.611987 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-combined-ca-bundle\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.612043 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-db-sync-config-data\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.612129 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grj6g\" (UniqueName: 
\"kubernetes.io/projected/ce253312-d33a-42e0-8f1e-9163171fd75d-kube-api-access-grj6g\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.638873 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce253312-d33a-42e0-8f1e-9163171fd75d" (UID: "ce253312-d33a-42e0-8f1e-9163171fd75d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.642529 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7645d5bbd9-t6l64"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.663778 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-98n58"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.673071 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce253312-d33a-42e0-8f1e-9163171fd75d" (UID: "ce253312-d33a-42e0-8f1e-9163171fd75d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.675700 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.684474 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-config" (OuterVolumeSpecName: "config") pod "ce253312-d33a-42e0-8f1e-9163171fd75d" (UID: "ce253312-d33a-42e0-8f1e-9163171fd75d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.707170 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-98n58"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.714896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8zp\" (UniqueName: \"kubernetes.io/projected/90fa85b0-3a62-4df2-835f-ca176c602f7b-kube-api-access-cr8zp\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.714951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-config-data\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.714993 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzfsj\" (UniqueName: \"kubernetes.io/projected/e23e418f-2c16-4fa8-94fb-5e575affd61b-kube-api-access-tzfsj\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715021 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgvw\" (UniqueName: \"kubernetes.io/projected/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-kube-api-access-bqgvw\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715052 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-scripts\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715068 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-combined-ca-bundle\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715085 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90fa85b0-3a62-4df2-835f-ca176c602f7b-horizon-secret-key\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715155 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-config-data\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-logs\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715213 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fa85b0-3a62-4df2-835f-ca176c602f7b-logs\") pod \"horizon-7645d5bbd9-t6l64\" 
(UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715244 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-combined-ca-bundle\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715297 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-scripts\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-db-sync-config-data\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715373 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715387 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.715397 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-dns-swift-storage-0\") on 
node \"crc\" DevicePath \"\"" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.717743 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-logs\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.721468 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce253312-d33a-42e0-8f1e-9163171fd75d" (UID: "ce253312-d33a-42e0-8f1e-9163171fd75d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.723338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-combined-ca-bundle\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.726814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-db-sync-config-data\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.727034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-combined-ca-bundle\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 
13:54:46.729597 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-scripts\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.734020 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-config-data\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.748603 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.750228 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.751573 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzfsj\" (UniqueName: \"kubernetes.io/projected/e23e418f-2c16-4fa8-94fb-5e575affd61b-kube-api-access-tzfsj\") pod \"barbican-db-sync-jmq2h\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.755328 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqgvw\" (UniqueName: \"kubernetes.io/projected/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-kube-api-access-bqgvw\") pod \"placement-db-sync-xfpzr\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.760201 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5n7xj" Feb 17 13:54:46 crc kubenswrapper[4768]: 
I0217 13:54:46.760497 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.760664 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.760815 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.767586 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.771847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce253312-d33a-42e0-8f1e-9163171fd75d" (UID: "ce253312-d33a-42e0-8f1e-9163171fd75d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.788764 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5c8md" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.808672 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.811638 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.815402 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.816856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fa85b0-3a62-4df2-835f-ca176c602f7b-logs\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.816907 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsjv\" (UniqueName: \"kubernetes.io/projected/ff892b26-a158-4942-85e5-6a657ffe4d4d-kube-api-access-jvsjv\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.816943 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.816965 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.816983 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-scripts\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-config\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817058 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817083 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8zp\" (UniqueName: \"kubernetes.io/projected/90fa85b0-3a62-4df2-835f-ca176c602f7b-kube-api-access-cr8zp\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817122 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-config-data\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817172 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90fa85b0-3a62-4df2-835f-ca176c602f7b-horizon-secret-key\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817213 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.817223 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce253312-d33a-42e0-8f1e-9163171fd75d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.818401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fa85b0-3a62-4df2-835f-ca176c602f7b-logs\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.819024 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-scripts\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.819288 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-internal-config-data" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.819990 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-config-data\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.826589 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90fa85b0-3a62-4df2-835f-ca176c602f7b-horizon-secret-key\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.827770 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.873797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8zp\" (UniqueName: \"kubernetes.io/projected/90fa85b0-3a62-4df2-835f-ca176c602f7b-kube-api-access-cr8zp\") pod \"horizon-7645d5bbd9-t6l64\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") " pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919118 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-logs\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919190 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvsjv\" (UniqueName: \"kubernetes.io/projected/ff892b26-a158-4942-85e5-6a657ffe4d4d-kube-api-access-jvsjv\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919219 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919308 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919335 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919359 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919380 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-config\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919444 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919470 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919486 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919505 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919525 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919554 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919570 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5nm\" (UniqueName: \"kubernetes.io/projected/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-kube-api-access-4s5nm\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919616 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp62p\" (UniqueName: \"kubernetes.io/projected/60befae2-547b-4b2e-a117-6c181de8c29d-kube-api-access-kp62p\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.919652 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-logs\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.922918 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.922926 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.924406 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-config\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.924829 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.926435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.932685 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.942847 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xfpzr" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.953542 4768 scope.go:117] "RemoveContainer" containerID="0aab15c6184155a093616cd2edd6c20089afc8b48768c70dff173c372bd14ddf" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.961374 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvsjv\" (UniqueName: \"kubernetes.io/projected/ff892b26-a158-4942-85e5-6a657ffe4d4d-kube-api-access-jvsjv\") pod \"dnsmasq-dns-785d8bcb8c-98n58\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:46 crc kubenswrapper[4768]: I0217 13:54:46.961939 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.019039 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t6h7l"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.021896 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023048 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-logs\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023093 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023311 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023349 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023378 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 
13:54:47.023420 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023438 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023467 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023487 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023511 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023665 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5nm\" (UniqueName: \"kubernetes.io/projected/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-kube-api-access-4s5nm\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023740 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp62p\" (UniqueName: \"kubernetes.io/projected/60befae2-547b-4b2e-a117-6c181de8c29d-kube-api-access-kp62p\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023757 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.023770 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-logs\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.024776 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-logs\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.025044 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.031870 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.033619 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.036217 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.042655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-logs\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.048312 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.050365 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.056197 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.056304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.057387 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.058842 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.062755 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.066037 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.085612 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp62p\" (UniqueName: \"kubernetes.io/projected/60befae2-547b-4b2e-a117-6c181de8c29d-kube-api-access-kp62p\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.101922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5nm\" (UniqueName: \"kubernetes.io/projected/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-kube-api-access-4s5nm\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.151900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.192890 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.281277 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55d496646f-4tq7c"] Feb 17 13:54:47 crc kubenswrapper[4768]: W0217 13:54:47.287802 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23e8824d_9b92_4724_971e_c807d48d8229.slice/crio-30a9df22abc456a7e1bc28304fb6ef77802d88c961a23692d448f2e131af72fb WatchSource:0}: Error finding container 30a9df22abc456a7e1bc28304fb6ef77802d88c961a23692d448f2e131af72fb: Status 404 returned error can't find the container with id 30a9df22abc456a7e1bc28304fb6ef77802d88c961a23692d448f2e131af72fb Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.307655 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-847c4cc679-h4ww6"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.343865 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.364818 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.424171 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.464023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-h4ww6" event={"ID":"43ba8a04-014d-4289-ad5e-9f883e9b2d69","Type":"ContainerStarted","Data":"769f988423ade28ec9bf3740627d23a1b2dd40dcc2bbe3ff25e5457ec52351e6"} Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.472865 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55d496646f-4tq7c" event={"ID":"23e8824d-9b92-4724-971e-c807d48d8229","Type":"ContainerStarted","Data":"30a9df22abc456a7e1bc28304fb6ef77802d88c961a23692d448f2e131af72fb"} Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.478553 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerStarted","Data":"02908c2a342a5a281b0209631f62c4cf5bc9e21166549d9dff4194a298c6a659"} Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.481883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" event={"ID":"ce253312-d33a-42e0-8f1e-9163171fd75d","Type":"ContainerDied","Data":"49d91cbe7d19799883fa394c12eadce3ea37fb37a3bd255e262eed67b549b22e"} Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.481965 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p8nmv" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.514868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t6h7l" event={"ID":"83c7e8b0-a496-489a-b66d-230261a68227","Type":"ContainerStarted","Data":"2d89ea7f309323fdd5a114791bdbbe2048b5bd8a04c542fbe771ff94d7bbc295"} Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.558933 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t6h7l" podStartSLOduration=2.55891203 podStartE2EDuration="2.55891203s" podCreationTimestamp="2026-02-17 13:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:54:47.557805709 +0000 UTC m=+1106.837192161" watchObservedRunningTime="2026-02-17 13:54:47.55891203 +0000 UTC m=+1106.838298472" Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.650800 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4vs6w"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.666252 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p8nmv"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.678147 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p8nmv"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.710982 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5c8md"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.757504 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xfpzr"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.869550 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7645d5bbd9-t6l64"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.908985 4768 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-db-sync-jmq2h"] Feb 17 13:54:47 crc kubenswrapper[4768]: I0217 13:54:47.918072 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-98n58"] Feb 17 13:54:47 crc kubenswrapper[4768]: W0217 13:54:47.924403 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode23e418f_2c16_4fa8_94fb_5e575affd61b.slice/crio-9e9636fc8b31b07ad1150758abaf2fe9fbb51a72dd958c4d5c56c48933b24f68 WatchSource:0}: Error finding container 9e9636fc8b31b07ad1150758abaf2fe9fbb51a72dd958c4d5c56c48933b24f68: Status 404 returned error can't find the container with id 9e9636fc8b31b07ad1150758abaf2fe9fbb51a72dd958c4d5c56c48933b24f68 Feb 17 13:54:47 crc kubenswrapper[4768]: W0217 13:54:47.925045 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff892b26_a158_4942_85e5_6a657ffe4d4d.slice/crio-9bb22b73f81198838da26a14d821336c98bc033857f4325f311e0c5aab723c83 WatchSource:0}: Error finding container 9bb22b73f81198838da26a14d821336c98bc033857f4325f311e0c5aab723c83: Status 404 returned error can't find the container with id 9bb22b73f81198838da26a14d821336c98bc033857f4325f311e0c5aab723c83 Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.251748 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.441280 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.501125 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55d496646f-4tq7c"] Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.589262 4768 generic.go:334] "Generic (PLEG): container finished" podID="43ba8a04-014d-4289-ad5e-9f883e9b2d69" 
containerID="00d70823518f47f4be0681f2badb6143b053616604e8cf22f53cdffe8d842f95" exitCode=0 Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.589361 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-h4ww6" event={"ID":"43ba8a04-014d-4289-ad5e-9f883e9b2d69","Type":"ContainerDied","Data":"00d70823518f47f4be0681f2badb6143b053616604e8cf22f53cdffe8d842f95"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.620244 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.624521 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4vs6w" event={"ID":"33f4a984-4d42-469e-8eda-c49264f0e4d9","Type":"ContainerStarted","Data":"1025948318c7e5bdae6ba53f7f34d7fc4f909f69d14dc541130d510c4e0b05c6"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.624562 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4vs6w" event={"ID":"33f4a984-4d42-469e-8eda-c49264f0e4d9","Type":"ContainerStarted","Data":"5fd0d71010b75e3c237c0de8e25f1154dfacd4f1e91f0d0d4c373780e9be1cdd"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.645279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa","Type":"ContainerStarted","Data":"9dde5c5edaf83743191b280232ffb5172ef53b2a3e12bbc0c3bb18fc67d6b655"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.648690 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7645d5bbd9-t6l64" event={"ID":"90fa85b0-3a62-4df2-835f-ca176c602f7b","Type":"ContainerStarted","Data":"eb0f353174c91cb882e7c2a9f0100ad0cf029a76b1bf9e5c74f8fd3ad7a717ec"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.653356 4768 generic.go:334] "Generic (PLEG): container finished" podID="ff892b26-a158-4942-85e5-6a657ffe4d4d" 
containerID="749990bad12c40693f551fb747fcf632e5d190b264bd9ed0e9c495c87e405369" exitCode=0 Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.653406 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" event={"ID":"ff892b26-a158-4942-85e5-6a657ffe4d4d","Type":"ContainerDied","Data":"749990bad12c40693f551fb747fcf632e5d190b264bd9ed0e9c495c87e405369"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.653429 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" event={"ID":"ff892b26-a158-4942-85e5-6a657ffe4d4d","Type":"ContainerStarted","Data":"9bb22b73f81198838da26a14d821336c98bc033857f4325f311e0c5aab723c83"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.654172 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66c5b78857-sdk9f"] Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.655394 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.673001 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66c5b78857-sdk9f"] Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.674092 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmq2h" event={"ID":"e23e418f-2c16-4fa8-94fb-5e575affd61b","Type":"ContainerStarted","Data":"9e9636fc8b31b07ad1150758abaf2fe9fbb51a72dd958c4d5c56c48933b24f68"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.676037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c8md" event={"ID":"df7e53d7-b63b-41b4-b909-c6effd0dab0c","Type":"ContainerStarted","Data":"4769e51a18129497bb9e0b8bf6197904947b6c69bee9e91c741476f9a28892c5"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.695626 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 
17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.754619 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xfpzr" event={"ID":"7ff099b6-c514-40c8-aa19-370d7f8dfbaf","Type":"ContainerStarted","Data":"604d11ec5eb14e3b30e1c76a35ca9f2a356ea5b1f90986a96134dd99b940906d"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.773130 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4vs6w" podStartSLOduration=3.7730940349999997 podStartE2EDuration="3.773094035s" podCreationTimestamp="2026-02-17 13:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:54:48.754805008 +0000 UTC m=+1108.034191450" watchObservedRunningTime="2026-02-17 13:54:48.773094035 +0000 UTC m=+1108.052480467" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.838824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-scripts\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.838887 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/30f56441-0814-409d-98c2-f51795b60a80-horizon-secret-key\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.838946 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-config-data\") pod \"horizon-66c5b78857-sdk9f\" (UID: 
\"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.839047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f56441-0814-409d-98c2-f51795b60a80-logs\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.839135 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz8jg\" (UniqueName: \"kubernetes.io/projected/30f56441-0814-409d-98c2-f51795b60a80-kube-api-access-xz8jg\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.959329 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t6h7l" event={"ID":"83c7e8b0-a496-489a-b66d-230261a68227","Type":"ContainerStarted","Data":"98248cbba561747411a17c66400e90ec2d76ff114d9555c928df29a6b7d4d6a1"} Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.963799 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f56441-0814-409d-98c2-f51795b60a80-logs\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.963876 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz8jg\" (UniqueName: \"kubernetes.io/projected/30f56441-0814-409d-98c2-f51795b60a80-kube-api-access-xz8jg\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 
13:54:48.963953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-scripts\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.963989 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/30f56441-0814-409d-98c2-f51795b60a80-horizon-secret-key\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.964035 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-config-data\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.971940 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-config-data\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.973121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f56441-0814-409d-98c2-f51795b60a80-logs\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.974365 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-scripts\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:48 crc kubenswrapper[4768]: I0217 13:54:48.995757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/30f56441-0814-409d-98c2-f51795b60a80-horizon-secret-key\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.032473 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz8jg\" (UniqueName: \"kubernetes.io/projected/30f56441-0814-409d-98c2-f51795b60a80-kube-api-access-xz8jg\") pod \"horizon-66c5b78857-sdk9f\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") " pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.277648 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.317870 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.374675 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-h4ww6" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.491159 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-nb\") pod \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.491328 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-sb\") pod \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.491497 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnkvl\" (UniqueName: \"kubernetes.io/projected/43ba8a04-014d-4289-ad5e-9f883e9b2d69-kube-api-access-fnkvl\") pod \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.491628 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-svc\") pod \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.491699 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-config\") pod \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.491724 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-swift-storage-0\") pod \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\" (UID: \"43ba8a04-014d-4289-ad5e-9f883e9b2d69\") " Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.509918 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ba8a04-014d-4289-ad5e-9f883e9b2d69-kube-api-access-fnkvl" (OuterVolumeSpecName: "kube-api-access-fnkvl") pod "43ba8a04-014d-4289-ad5e-9f883e9b2d69" (UID: "43ba8a04-014d-4289-ad5e-9f883e9b2d69"). InnerVolumeSpecName "kube-api-access-fnkvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.594863 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnkvl\" (UniqueName: \"kubernetes.io/projected/43ba8a04-014d-4289-ad5e-9f883e9b2d69-kube-api-access-fnkvl\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.610860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43ba8a04-014d-4289-ad5e-9f883e9b2d69" (UID: "43ba8a04-014d-4289-ad5e-9f883e9b2d69"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.618708 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce253312-d33a-42e0-8f1e-9163171fd75d" path="/var/lib/kubelet/pods/ce253312-d33a-42e0-8f1e-9163171fd75d/volumes" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.621061 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43ba8a04-014d-4289-ad5e-9f883e9b2d69" (UID: "43ba8a04-014d-4289-ad5e-9f883e9b2d69"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.646845 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43ba8a04-014d-4289-ad5e-9f883e9b2d69" (UID: "43ba8a04-014d-4289-ad5e-9f883e9b2d69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.670650 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43ba8a04-014d-4289-ad5e-9f883e9b2d69" (UID: "43ba8a04-014d-4289-ad5e-9f883e9b2d69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.673204 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-config" (OuterVolumeSpecName: "config") pod "43ba8a04-014d-4289-ad5e-9f883e9b2d69" (UID: "43ba8a04-014d-4289-ad5e-9f883e9b2d69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.696911 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.696964 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.696975 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.696986 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.696997 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ba8a04-014d-4289-ad5e-9f883e9b2d69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.879573 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66c5b78857-sdk9f"] Feb 17 13:54:49 crc kubenswrapper[4768]: I0217 13:54:49.996672 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa","Type":"ContainerStarted","Data":"b2479a34d313047903415a4f7edd287b784efe4120635b39b516fcb4ae0f1e43"} Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.006421 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" event={"ID":"ff892b26-a158-4942-85e5-6a657ffe4d4d","Type":"ContainerStarted","Data":"7542cb62164edc829a2341e05c3597d2591cf13f9fc3242f4775fab96d07162d"} Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.006509 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.016223 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c5b78857-sdk9f" event={"ID":"30f56441-0814-409d-98c2-f51795b60a80","Type":"ContainerStarted","Data":"df8ea5c2b2725dd5f342b485c3f416dd19a956f873112c676225d48fde0c4837"} Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.026860 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-h4ww6" event={"ID":"43ba8a04-014d-4289-ad5e-9f883e9b2d69","Type":"ContainerDied","Data":"769f988423ade28ec9bf3740627d23a1b2dd40dcc2bbe3ff25e5457ec52351e6"} Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.026944 4768 scope.go:117] "RemoveContainer" containerID="00d70823518f47f4be0681f2badb6143b053616604e8cf22f53cdffe8d842f95" Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.026883 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-h4ww6" Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.028858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60befae2-547b-4b2e-a117-6c181de8c29d","Type":"ContainerStarted","Data":"423745d4f64c1195d779d169d2643c4327848b5134e74bd81a08753511f9b427"} Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.056907 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" podStartSLOduration=4.056883688 podStartE2EDuration="4.056883688s" podCreationTimestamp="2026-02-17 13:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:54:50.0306299 +0000 UTC m=+1109.310016362" watchObservedRunningTime="2026-02-17 13:54:50.056883688 +0000 UTC m=+1109.336270130" Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.101242 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-h4ww6"] Feb 17 13:54:50 crc kubenswrapper[4768]: I0217 13:54:50.108854 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-h4ww6"] Feb 17 13:54:51 crc kubenswrapper[4768]: I0217 13:54:51.050523 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60befae2-547b-4b2e-a117-6c181de8c29d","Type":"ContainerStarted","Data":"08cc5bc6f875fb0b38b26674a4928dd368487de9487bba4965941f52ab0edd2c"} Feb 17 13:54:51 crc kubenswrapper[4768]: I0217 13:54:51.077810 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-log" containerID="cri-o://b2479a34d313047903415a4f7edd287b784efe4120635b39b516fcb4ae0f1e43" gracePeriod=30 Feb 17 13:54:51 crc kubenswrapper[4768]: 
I0217 13:54:51.077879 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa","Type":"ContainerStarted","Data":"3c563a20f3a627ef1f017d8001d0c2378d6646195926301fe1fd91776a2feb58"} Feb 17 13:54:51 crc kubenswrapper[4768]: I0217 13:54:51.078222 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-httpd" containerID="cri-o://3c563a20f3a627ef1f017d8001d0c2378d6646195926301fe1fd91776a2feb58" gracePeriod=30 Feb 17 13:54:51 crc kubenswrapper[4768]: I0217 13:54:51.107368 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.107350448 podStartE2EDuration="5.107350448s" podCreationTimestamp="2026-02-17 13:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:54:51.103956004 +0000 UTC m=+1110.383342446" watchObservedRunningTime="2026-02-17 13:54:51.107350448 +0000 UTC m=+1110.386736890" Feb 17 13:54:51 crc kubenswrapper[4768]: I0217 13:54:51.549525 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ba8a04-014d-4289-ad5e-9f883e9b2d69" path="/var/lib/kubelet/pods/43ba8a04-014d-4289-ad5e-9f883e9b2d69/volumes" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.126551 4768 generic.go:334] "Generic (PLEG): container finished" podID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerID="3c563a20f3a627ef1f017d8001d0c2378d6646195926301fe1fd91776a2feb58" exitCode=0 Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.126827 4768 generic.go:334] "Generic (PLEG): container finished" podID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerID="b2479a34d313047903415a4f7edd287b784efe4120635b39b516fcb4ae0f1e43" exitCode=143 Feb 17 13:54:52 crc 
kubenswrapper[4768]: I0217 13:54:52.126789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa","Type":"ContainerDied","Data":"3c563a20f3a627ef1f017d8001d0c2378d6646195926301fe1fd91776a2feb58"} Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.126911 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa","Type":"ContainerDied","Data":"b2479a34d313047903415a4f7edd287b784efe4120635b39b516fcb4ae0f1e43"} Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.133047 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60befae2-547b-4b2e-a117-6c181de8c29d","Type":"ContainerStarted","Data":"b7da8f33eceb01f55d44ef32bc96249591916ed6f39df5da15e27f61550a8fbc"} Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.133273 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-log" containerID="cri-o://08cc5bc6f875fb0b38b26674a4928dd368487de9487bba4965941f52ab0edd2c" gracePeriod=30 Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.133662 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-httpd" containerID="cri-o://b7da8f33eceb01f55d44ef32bc96249591916ed6f39df5da15e27f61550a8fbc" gracePeriod=30 Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.167602 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.167564897 podStartE2EDuration="6.167564897s" podCreationTimestamp="2026-02-17 13:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:54:52.160357528 +0000 UTC m=+1111.439743970" watchObservedRunningTime="2026-02-17 13:54:52.167564897 +0000 UTC m=+1111.446951339" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.342702 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364393 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-httpd-run\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364464 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5nm\" (UniqueName: \"kubernetes.io/projected/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-kube-api-access-4s5nm\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364608 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-public-tls-certs\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364644 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364678 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-config-data\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364697 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-scripts\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364757 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-logs\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.364782 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-combined-ca-bundle\") pod \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\" (UID: \"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa\") " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.366564 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-logs" (OuterVolumeSpecName: "logs") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.366782 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.372877 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.384416 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-kube-api-access-4s5nm" (OuterVolumeSpecName: "kube-api-access-4s5nm") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "kube-api-access-4s5nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.403382 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-scripts" (OuterVolumeSpecName: "scripts") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.403710 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.430989 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.447729 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-config-data" (OuterVolumeSpecName: "config-data") pod "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" (UID: "6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.467039 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.467066 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.467075 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.467083 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 
13:54:52.467094 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.467114 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5nm\" (UniqueName: \"kubernetes.io/projected/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-kube-api-access-4s5nm\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.467124 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.467147 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.494839 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 17 13:54:52 crc kubenswrapper[4768]: I0217 13:54:52.568538 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.143964 4768 generic.go:334] "Generic (PLEG): container finished" podID="83c7e8b0-a496-489a-b66d-230261a68227" containerID="98248cbba561747411a17c66400e90ec2d76ff114d9555c928df29a6b7d4d6a1" exitCode=0 Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.144018 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t6h7l" 
event={"ID":"83c7e8b0-a496-489a-b66d-230261a68227","Type":"ContainerDied","Data":"98248cbba561747411a17c66400e90ec2d76ff114d9555c928df29a6b7d4d6a1"} Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.146348 4768 generic.go:334] "Generic (PLEG): container finished" podID="60befae2-547b-4b2e-a117-6c181de8c29d" containerID="b7da8f33eceb01f55d44ef32bc96249591916ed6f39df5da15e27f61550a8fbc" exitCode=0 Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.146366 4768 generic.go:334] "Generic (PLEG): container finished" podID="60befae2-547b-4b2e-a117-6c181de8c29d" containerID="08cc5bc6f875fb0b38b26674a4928dd368487de9487bba4965941f52ab0edd2c" exitCode=143 Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.146398 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60befae2-547b-4b2e-a117-6c181de8c29d","Type":"ContainerDied","Data":"b7da8f33eceb01f55d44ef32bc96249591916ed6f39df5da15e27f61550a8fbc"} Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.146413 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60befae2-547b-4b2e-a117-6c181de8c29d","Type":"ContainerDied","Data":"08cc5bc6f875fb0b38b26674a4928dd368487de9487bba4965941f52ab0edd2c"} Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.150717 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa","Type":"ContainerDied","Data":"9dde5c5edaf83743191b280232ffb5172ef53b2a3e12bbc0c3bb18fc67d6b655"} Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.150744 4768 scope.go:117] "RemoveContainer" containerID="3c563a20f3a627ef1f017d8001d0c2378d6646195926301fe1fd91776a2feb58" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.150869 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.194240 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.198588 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.214416 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:53 crc kubenswrapper[4768]: E0217 13:54:53.215009 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-httpd" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.215121 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-httpd" Feb 17 13:54:53 crc kubenswrapper[4768]: E0217 13:54:53.215289 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ba8a04-014d-4289-ad5e-9f883e9b2d69" containerName="init" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.215365 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ba8a04-014d-4289-ad5e-9f883e9b2d69" containerName="init" Feb 17 13:54:53 crc kubenswrapper[4768]: E0217 13:54:53.215437 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-log" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.215495 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-log" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.217163 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-httpd" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.217286 4768 
memory_manager.go:354] "RemoveStaleState removing state" podUID="43ba8a04-014d-4289-ad5e-9f883e9b2d69" containerName="init" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.217360 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" containerName="glance-log" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.218284 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.221541 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.221699 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.235163 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.283769 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.283828 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.283932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ncnxk\" (UniqueName: \"kubernetes.io/projected/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-kube-api-access-ncnxk\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.283956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.283975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.284012 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.284039 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-logs\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.284084 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385244 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncnxk\" (UniqueName: \"kubernetes.io/projected/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-kube-api-access-ncnxk\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385619 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385656 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385746 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-logs\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.385877 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.386847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-logs\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.392254 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" 
(UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.392530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.393569 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-scripts\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.395329 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.399012 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.410400 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-config-data\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " 
pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.412537 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncnxk\" (UniqueName: \"kubernetes.io/projected/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-kube-api-access-ncnxk\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.427236 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " pod="openstack/glance-default-external-api-0" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.543971 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa" path="/var/lib/kubelet/pods/6a9e0b83-c5d7-4bfe-a90a-ab36b26a8aaa/volumes" Feb 17 13:54:53 crc kubenswrapper[4768]: I0217 13:54:53.558816 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.622567 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7645d5bbd9-t6l64"] Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.662616 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.680228 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-684746c5d4-6lxfv"] Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.682610 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.685835 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.689945 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-684746c5d4-6lxfv"] Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.730707 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-scripts\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.730750 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-tls-certs\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.730796 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-combined-ca-bundle\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.730825 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-logs\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc 
kubenswrapper[4768]: I0217 13:54:55.730848 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-config-data\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.730896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-secret-key\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.730979 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxtml\" (UniqueName: \"kubernetes.io/projected/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-kube-api-access-bxtml\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.763719 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66c5b78857-sdk9f"] Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.803137 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6584d79658-wtxrc"] Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.808429 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.828194 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6584d79658-wtxrc"] Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.832907 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/331a37d3-96b1-4065-9941-25acc64cc6c1-config-data\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.832951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-combined-ca-bundle\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.832977 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/331a37d3-96b1-4065-9941-25acc64cc6c1-logs\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-secret-key\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833051 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfph\" (UniqueName: 
\"kubernetes.io/projected/331a37d3-96b1-4065-9941-25acc64cc6c1-kube-api-access-8nfph\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833074 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxtml\" (UniqueName: \"kubernetes.io/projected/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-kube-api-access-bxtml\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833145 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-horizon-secret-key\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833166 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-scripts\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833182 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-tls-certs\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833217 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-combined-ca-bundle\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833244 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-logs\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833261 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-horizon-tls-certs\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833278 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/331a37d3-96b1-4065-9941-25acc64cc6c1-scripts\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833293 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-config-data\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.833951 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-logs\") pod \"horizon-684746c5d4-6lxfv\" (UID: 
\"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.834390 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-config-data\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.834816 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-scripts\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.839277 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-combined-ca-bundle\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.839662 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-secret-key\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.848737 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-tls-certs\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 
13:54:55.853853 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxtml\" (UniqueName: \"kubernetes.io/projected/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-kube-api-access-bxtml\") pod \"horizon-684746c5d4-6lxfv\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.934321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-horizon-secret-key\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.934400 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-horizon-tls-certs\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.934423 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/331a37d3-96b1-4065-9941-25acc64cc6c1-scripts\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.934454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/331a37d3-96b1-4065-9941-25acc64cc6c1-config-data\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.934476 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-combined-ca-bundle\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.934497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/331a37d3-96b1-4065-9941-25acc64cc6c1-logs\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.934529 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfph\" (UniqueName: \"kubernetes.io/projected/331a37d3-96b1-4065-9941-25acc64cc6c1-kube-api-access-8nfph\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.938095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-horizon-secret-key\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.938884 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/331a37d3-96b1-4065-9941-25acc64cc6c1-scripts\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.939194 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/331a37d3-96b1-4065-9941-25acc64cc6c1-config-data\") pod 
\"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.939190 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/331a37d3-96b1-4065-9941-25acc64cc6c1-logs\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.939674 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-horizon-tls-certs\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.940648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331a37d3-96b1-4065-9941-25acc64cc6c1-combined-ca-bundle\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:55 crc kubenswrapper[4768]: I0217 13:54:55.952290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfph\" (UniqueName: \"kubernetes.io/projected/331a37d3-96b1-4065-9941-25acc64cc6c1-kube-api-access-8nfph\") pod \"horizon-6584d79658-wtxrc\" (UID: \"331a37d3-96b1-4065-9941-25acc64cc6c1\") " pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:56 crc kubenswrapper[4768]: I0217 13:54:56.021772 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:54:56 crc kubenswrapper[4768]: I0217 13:54:56.132643 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:54:57 crc kubenswrapper[4768]: I0217 13:54:57.024513 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:54:57 crc kubenswrapper[4768]: I0217 13:54:57.088884 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6pghj"] Feb 17 13:54:57 crc kubenswrapper[4768]: I0217 13:54:57.089176 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns" containerID="cri-o://87d58f42c17bfa2aa8f87897bc651d29bae5ab1acae7c5e997259fb061e0bc27" gracePeriod=10 Feb 17 13:54:58 crc kubenswrapper[4768]: I0217 13:54:58.061222 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:54:58 crc kubenswrapper[4768]: I0217 13:54:58.061289 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:54:58 crc kubenswrapper[4768]: I0217 13:54:58.224636 4768 generic.go:334] "Generic (PLEG): container finished" podID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerID="87d58f42c17bfa2aa8f87897bc651d29bae5ab1acae7c5e997259fb061e0bc27" exitCode=0 Feb 17 13:54:58 crc kubenswrapper[4768]: I0217 13:54:58.224752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" 
event={"ID":"e0ab0a41-18cb-4ed9-9c99-009e71c184f6","Type":"ContainerDied","Data":"87d58f42c17bfa2aa8f87897bc651d29bae5ab1acae7c5e997259fb061e0bc27"} Feb 17 13:54:59 crc kubenswrapper[4768]: I0217 13:54:59.580785 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.059785 4768 scope.go:117] "RemoveContainer" containerID="b2479a34d313047903415a4f7edd287b784efe4120635b39b516fcb4ae0f1e43" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.144732 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.150687 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t6h7l" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329187 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-fernet-keys\") pod \"83c7e8b0-a496-489a-b66d-230261a68227\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329240 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-config-data\") pod \"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329277 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-scripts\") pod 
\"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329318 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr4bd\" (UniqueName: \"kubernetes.io/projected/83c7e8b0-a496-489a-b66d-230261a68227-kube-api-access-hr4bd\") pod \"83c7e8b0-a496-489a-b66d-230261a68227\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329346 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-combined-ca-bundle\") pod \"83c7e8b0-a496-489a-b66d-230261a68227\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329377 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp62p\" (UniqueName: \"kubernetes.io/projected/60befae2-547b-4b2e-a117-6c181de8c29d-kube-api-access-kp62p\") pod \"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329404 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-combined-ca-bundle\") pod \"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329427 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-credential-keys\") pod \"83c7e8b0-a496-489a-b66d-230261a68227\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329446 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-scripts\") pod \"83c7e8b0-a496-489a-b66d-230261a68227\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329474 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-logs\") pod \"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329504 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-config-data\") pod \"83c7e8b0-a496-489a-b66d-230261a68227\" (UID: \"83c7e8b0-a496-489a-b66d-230261a68227\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329585 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-httpd-run\") pod \"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329604 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-internal-tls-certs\") pod \"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.329631 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"60befae2-547b-4b2e-a117-6c181de8c29d\" (UID: \"60befae2-547b-4b2e-a117-6c181de8c29d\") " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 
13:55:00.334581 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-logs" (OuterVolumeSpecName: "logs") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.350986 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.374357 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-scripts" (OuterVolumeSpecName: "scripts") pod "83c7e8b0-a496-489a-b66d-230261a68227" (UID: "83c7e8b0-a496-489a-b66d-230261a68227"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.374787 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c7e8b0-a496-489a-b66d-230261a68227-kube-api-access-hr4bd" (OuterVolumeSpecName: "kube-api-access-hr4bd") pod "83c7e8b0-a496-489a-b66d-230261a68227" (UID: "83c7e8b0-a496-489a-b66d-230261a68227"). InnerVolumeSpecName "kube-api-access-hr4bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.375435 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "83c7e8b0-a496-489a-b66d-230261a68227" (UID: "83c7e8b0-a496-489a-b66d-230261a68227"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.375921 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60befae2-547b-4b2e-a117-6c181de8c29d-kube-api-access-kp62p" (OuterVolumeSpecName: "kube-api-access-kp62p") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "kube-api-access-kp62p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.382289 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.384224 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "83c7e8b0-a496-489a-b66d-230261a68227" (UID: "83c7e8b0-a496-489a-b66d-230261a68227"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.384536 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t6h7l" event={"ID":"83c7e8b0-a496-489a-b66d-230261a68227","Type":"ContainerDied","Data":"2d89ea7f309323fdd5a114791bdbbe2048b5bd8a04c542fbe771ff94d7bbc295"} Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.384582 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d89ea7f309323fdd5a114791bdbbe2048b5bd8a04c542fbe771ff94d7bbc295" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.384680 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t6h7l" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.401920 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-scripts" (OuterVolumeSpecName: "scripts") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.415469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60befae2-547b-4b2e-a117-6c181de8c29d","Type":"ContainerDied","Data":"423745d4f64c1195d779d169d2643c4327848b5134e74bd81a08753511f9b427"} Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.415586 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.420959 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83c7e8b0-a496-489a-b66d-230261a68227" (UID: "83c7e8b0-a496-489a-b66d-230261a68227"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.429751 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.432545 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-config-data" (OuterVolumeSpecName: "config-data") pod "83c7e8b0-a496-489a-b66d-230261a68227" (UID: "83c7e8b0-a496-489a-b66d-230261a68227"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.436977 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437015 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437025 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437034 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437043 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr4bd\" (UniqueName: \"kubernetes.io/projected/83c7e8b0-a496-489a-b66d-230261a68227-kube-api-access-hr4bd\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437053 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437061 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp62p\" (UniqueName: \"kubernetes.io/projected/60befae2-547b-4b2e-a117-6c181de8c29d-kube-api-access-kp62p\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437069 4768 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437077 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437086 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437093 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60befae2-547b-4b2e-a117-6c181de8c29d-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.437114 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c7e8b0-a496-489a-b66d-230261a68227-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.448762 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-config-data" (OuterVolumeSpecName: "config-data") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.462635 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "60befae2-547b-4b2e-a117-6c181de8c29d" (UID: "60befae2-547b-4b2e-a117-6c181de8c29d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.463190 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.538122 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.538157 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.538171 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60befae2-547b-4b2e-a117-6c181de8c29d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.749273 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.760750 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.784588 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] 
Feb 17 13:55:00 crc kubenswrapper[4768]: E0217 13:55:00.785014 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c7e8b0-a496-489a-b66d-230261a68227" containerName="keystone-bootstrap" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.785037 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c7e8b0-a496-489a-b66d-230261a68227" containerName="keystone-bootstrap" Feb 17 13:55:00 crc kubenswrapper[4768]: E0217 13:55:00.785077 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-httpd" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.785085 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-httpd" Feb 17 13:55:00 crc kubenswrapper[4768]: E0217 13:55:00.785096 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-log" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.785122 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-log" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.785336 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-log" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.785366 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c7e8b0-a496-489a-b66d-230261a68227" containerName="keystone-bootstrap" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.785381 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" containerName="glance-httpd" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.786511 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.788900 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.789233 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.797031 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.849992 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.850049 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.850073 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866qp\" (UniqueName: \"kubernetes.io/projected/a6e3afdd-2e51-4f0a-9208-5784a5900c96-kube-api-access-866qp\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.850123 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.850184 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.850205 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.850242 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.850275 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.951381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.951926 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.951996 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.952038 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.952083 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-866qp\" (UniqueName: \"kubernetes.io/projected/a6e3afdd-2e51-4f0a-9208-5784a5900c96-kube-api-access-866qp\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.952180 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.952261 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.952295 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.952311 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.952644 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.953007 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.956455 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.956604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.957499 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.958208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.975593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-866qp\" (UniqueName: \"kubernetes.io/projected/a6e3afdd-2e51-4f0a-9208-5784a5900c96-kube-api-access-866qp\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 13:55:00 crc kubenswrapper[4768]: I0217 13:55:00.985075 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.124932 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.261587 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t6h7l"] Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.268910 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t6h7l"] Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.352434 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-78zqh"] Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.353835 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.358556 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.358721 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.358979 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.361719 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.361836 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xwvsr" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.368450 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-78zqh"] Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.459490 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-config-data\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.459567 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-credential-keys\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.459683 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-fernet-keys\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.459864 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfcfp\" (UniqueName: \"kubernetes.io/projected/e1acea03-8a67-474d-a6b1-803ea949a747-kube-api-access-gfcfp\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.460062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-combined-ca-bundle\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.460165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-scripts\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.548763 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60befae2-547b-4b2e-a117-6c181de8c29d" path="/var/lib/kubelet/pods/60befae2-547b-4b2e-a117-6c181de8c29d/volumes" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.549751 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c7e8b0-a496-489a-b66d-230261a68227" path="/var/lib/kubelet/pods/83c7e8b0-a496-489a-b66d-230261a68227/volumes" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.561807 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-credential-keys\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.561883 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-fernet-keys\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.561939 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfcfp\" (UniqueName: \"kubernetes.io/projected/e1acea03-8a67-474d-a6b1-803ea949a747-kube-api-access-gfcfp\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.562012 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-combined-ca-bundle\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.562062 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-scripts\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.562081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-config-data\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.566783 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-credential-keys\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.567063 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-combined-ca-bundle\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.567074 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-scripts\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.567139 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-fernet-keys\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.570139 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-config-data\") pod \"keystone-bootstrap-78zqh\" (UID: 
\"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.579666 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfcfp\" (UniqueName: \"kubernetes.io/projected/e1acea03-8a67-474d-a6b1-803ea949a747-kube-api-access-gfcfp\") pod \"keystone-bootstrap-78zqh\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:01 crc kubenswrapper[4768]: I0217 13:55:01.680004 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:04 crc kubenswrapper[4768]: I0217 13:55:04.580655 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.247536 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.248173 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b6hb9h76h77hb9h575hb5h599h95h5bdh7ch68chffh578h678h9dh59dhc5h5c9h64dh56bhc7h667h5bbh55bh69h88hc6h5c8hbdhc9h56fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr8zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7645d5bbd9-t6l64_openstack(90fa85b0-3a62-4df2-835f-ca176c602f7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 
13:55:07.250428 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7645d5bbd9-t6l64" podUID="90fa85b0-3a62-4df2-835f-ca176c602f7b" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.612208 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.614395 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh5c6hb6h5c7h67fh64bh566h54bh57bh686h5f8h654h678h66bh56h5chcbh5ffh5f5h55dhcfh96h666h88h68fhch668h58h5dh5fh8h65bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqxc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1c0e296a-80f3-4efe-bb28-17fdfd153397): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.628737 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.629011 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n569h596h666h597h667h557h5ddh7fh698hcbh86hb6h59hdh95h7chbbh5c5h5bdh595h646h54h58h59ch69h549hcbh656hd8hbfh5f5h656q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz8jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66c5b78857-sdk9f_openstack(30f56441-0814-409d-98c2-f51795b60a80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 
13:55:07.630093 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.630329 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n694hdbh85h694h5dfh566h8bh645h684h647h78h5d5h59bh5bfh564h5cch56fh56fh677h87h566h5b4h594h65h55bhbfh546h5f5h585h584h565h545q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7n6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil
,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-55d496646f-4tq7c_openstack(23e8824d-9b92-4724-971e-c807d48d8229): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.631624 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66c5b78857-sdk9f" podUID="30f56441-0814-409d-98c2-f51795b60a80" Feb 17 13:55:07 crc kubenswrapper[4768]: E0217 13:55:07.632385 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-55d496646f-4tq7c" podUID="23e8824d-9b92-4724-971e-c807d48d8229" Feb 17 13:55:09 crc kubenswrapper[4768]: I0217 13:55:09.504675 4768 generic.go:334] "Generic (PLEG): container finished" podID="33f4a984-4d42-469e-8eda-c49264f0e4d9" containerID="1025948318c7e5bdae6ba53f7f34d7fc4f909f69d14dc541130d510c4e0b05c6" exitCode=0 Feb 17 13:55:09 crc kubenswrapper[4768]: I0217 13:55:09.504752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4vs6w" 
event={"ID":"33f4a984-4d42-469e-8eda-c49264f0e4d9","Type":"ContainerDied","Data":"1025948318c7e5bdae6ba53f7f34d7fc4f909f69d14dc541130d510c4e0b05c6"} Feb 17 13:55:14 crc kubenswrapper[4768]: I0217 13:55:14.580898 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Feb 17 13:55:14 crc kubenswrapper[4768]: I0217 13:55:14.581442 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:55:15 crc kubenswrapper[4768]: E0217 13:55:15.746752 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 13:55:15 crc kubenswrapper[4768]: E0217 13:55:15.746962 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzfsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-jmq2h_openstack(e23e418f-2c16-4fa8-94fb-5e575affd61b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 13:55:15 crc kubenswrapper[4768]: E0217 13:55:15.748217 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-jmq2h" 
podUID="e23e418f-2c16-4fa8-94fb-5e575affd61b" Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.835653 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4vs6w" Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.846158 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.930712 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-combined-ca-bundle\") pod \"33f4a984-4d42-469e-8eda-c49264f0e4d9\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.930754 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-nb\") pod \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.930785 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bkq2\" (UniqueName: \"kubernetes.io/projected/33f4a984-4d42-469e-8eda-c49264f0e4d9-kube-api-access-4bkq2\") pod \"33f4a984-4d42-469e-8eda-c49264f0e4d9\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") " Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.930827 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-sb\") pod \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") " Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.930897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-f4v6f\" (UniqueName: \"kubernetes.io/projected/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-kube-api-access-f4v6f\") pod \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") "
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.930932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-config\") pod \"33f4a984-4d42-469e-8eda-c49264f0e4d9\" (UID: \"33f4a984-4d42-469e-8eda-c49264f0e4d9\") "
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.930963 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-svc\") pod \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") "
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.931041 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-config\") pod \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") "
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.931121 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-swift-storage-0\") pod \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\" (UID: \"e0ab0a41-18cb-4ed9-9c99-009e71c184f6\") "
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.935697 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-kube-api-access-f4v6f" (OuterVolumeSpecName: "kube-api-access-f4v6f") pod "e0ab0a41-18cb-4ed9-9c99-009e71c184f6" (UID: "e0ab0a41-18cb-4ed9-9c99-009e71c184f6"). InnerVolumeSpecName "kube-api-access-f4v6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.945416 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f4a984-4d42-469e-8eda-c49264f0e4d9-kube-api-access-4bkq2" (OuterVolumeSpecName: "kube-api-access-4bkq2") pod "33f4a984-4d42-469e-8eda-c49264f0e4d9" (UID: "33f4a984-4d42-469e-8eda-c49264f0e4d9"). InnerVolumeSpecName "kube-api-access-4bkq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.958525 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-config" (OuterVolumeSpecName: "config") pod "33f4a984-4d42-469e-8eda-c49264f0e4d9" (UID: "33f4a984-4d42-469e-8eda-c49264f0e4d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.981046 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f4a984-4d42-469e-8eda-c49264f0e4d9" (UID: "33f4a984-4d42-469e-8eda-c49264f0e4d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.981192 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0ab0a41-18cb-4ed9-9c99-009e71c184f6" (UID: "e0ab0a41-18cb-4ed9-9c99-009e71c184f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.988585 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0ab0a41-18cb-4ed9-9c99-009e71c184f6" (UID: "e0ab0a41-18cb-4ed9-9c99-009e71c184f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:15 crc kubenswrapper[4768]: I0217 13:55:15.998451 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0ab0a41-18cb-4ed9-9c99-009e71c184f6" (UID: "e0ab0a41-18cb-4ed9-9c99-009e71c184f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.006576 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-config" (OuterVolumeSpecName: "config") pod "e0ab0a41-18cb-4ed9-9c99-009e71c184f6" (UID: "e0ab0a41-18cb-4ed9-9c99-009e71c184f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.012863 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0ab0a41-18cb-4ed9-9c99-009e71c184f6" (UID: "e0ab0a41-18cb-4ed9-9c99-009e71c184f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032622 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032654 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032664 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bkq2\" (UniqueName: \"kubernetes.io/projected/33f4a984-4d42-469e-8eda-c49264f0e4d9-kube-api-access-4bkq2\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032683 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032693 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4v6f\" (UniqueName: \"kubernetes.io/projected/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-kube-api-access-f4v6f\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032701 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f4a984-4d42-469e-8eda-c49264f0e4d9-config\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032709 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032716 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-config\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.032724 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0ab0a41-18cb-4ed9-9c99-009e71c184f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.572421 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" event={"ID":"e0ab0a41-18cb-4ed9-9c99-009e71c184f6","Type":"ContainerDied","Data":"cc94329e1aff3185766a6487d10350a3d6f6172799b2507b11fa914e7edf9210"}
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.572522 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6pghj"
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.580487 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4vs6w" event={"ID":"33f4a984-4d42-469e-8eda-c49264f0e4d9","Type":"ContainerDied","Data":"5fd0d71010b75e3c237c0de8e25f1154dfacd4f1e91f0d0d4c373780e9be1cdd"}
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.580528 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fd0d71010b75e3c237c0de8e25f1154dfacd4f1e91f0d0d4c373780e9be1cdd"
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.580592 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4vs6w"
Feb 17 13:55:16 crc kubenswrapper[4768]: E0217 13:55:16.582679 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-jmq2h" podUID="e23e418f-2c16-4fa8-94fb-5e575affd61b"
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.621360 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6pghj"]
Feb 17 13:55:16 crc kubenswrapper[4768]: I0217 13:55:16.628875 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6pghj"]
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.006612 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-x5846"]
Feb 17 13:55:17 crc kubenswrapper[4768]: E0217 13:55:17.007162 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.007182 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns"
Feb 17 13:55:17 crc kubenswrapper[4768]: E0217 13:55:17.007206 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f4a984-4d42-469e-8eda-c49264f0e4d9" containerName="neutron-db-sync"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.007213 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f4a984-4d42-469e-8eda-c49264f0e4d9" containerName="neutron-db-sync"
Feb 17 13:55:17 crc kubenswrapper[4768]: E0217 13:55:17.007226 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="init"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.007233 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="init"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.007397 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.007413 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f4a984-4d42-469e-8eda-c49264f0e4d9" containerName="neutron-db-sync"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.010413 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.014450 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-x5846"]
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.048055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.048379 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvz8\" (UniqueName: \"kubernetes.io/projected/9ce4d08f-4b2c-4831-acce-546ddff7277a-kube-api-access-pbvz8\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.048425 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.048460 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.048497 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.048538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-config\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.115545 4768 scope.go:117] "RemoveContainer" containerID="b7da8f33eceb01f55d44ef32bc96249591916ed6f39df5da15e27f61550a8fbc"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.149504 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.149548 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbvz8\" (UniqueName: \"kubernetes.io/projected/9ce4d08f-4b2c-4831-acce-546ddff7277a-kube-api-access-pbvz8\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.149585 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.149624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.149663 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.149700 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-config\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.150493 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.151473 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b89745fbd-lcjtt"]
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.152972 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.154288 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.153690 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.153360 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-config\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.154762 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.157220 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.157557 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.157704 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gwhsg"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.158432 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.168680 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b89745fbd-lcjtt"]
Feb 17 13:55:17 crc kubenswrapper[4768]: E0217 13:55:17.171066 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Feb 17 13:55:17 crc kubenswrapper[4768]: E0217 13:55:17.171287 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2kmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5c8md_openstack(df7e53d7-b63b-41b4-b909-c6effd0dab0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 13:55:17 crc kubenswrapper[4768]: E0217 13:55:17.172530 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5c8md" podUID="df7e53d7-b63b-41b4-b909-c6effd0dab0c"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.183785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbvz8\" (UniqueName: \"kubernetes.io/projected/9ce4d08f-4b2c-4831-acce-546ddff7277a-kube-api-access-pbvz8\") pod \"dnsmasq-dns-55f844cf75-x5846\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") " pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.248772 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7645d5bbd9-t6l64"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.252799 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-httpd-config\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.252855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-config\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.252908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8ncg\" (UniqueName: \"kubernetes.io/projected/c732e620-9ed0-4246-93ca-c71277029344-kube-api-access-s8ncg\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.252938 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-combined-ca-bundle\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.252965 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-ovndb-tls-certs\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.257954 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55d496646f-4tq7c"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.266086 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c5b78857-sdk9f"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.333474 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.354523 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f56441-0814-409d-98c2-f51795b60a80-logs\") pod \"30f56441-0814-409d-98c2-f51795b60a80\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.354786 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-scripts\") pod \"23e8824d-9b92-4724-971e-c807d48d8229\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.354947 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr8zp\" (UniqueName: \"kubernetes.io/projected/90fa85b0-3a62-4df2-835f-ca176c602f7b-kube-api-access-cr8zp\") pod \"90fa85b0-3a62-4df2-835f-ca176c602f7b\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.354838 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f56441-0814-409d-98c2-f51795b60a80-logs" (OuterVolumeSpecName: "logs") pod "30f56441-0814-409d-98c2-f51795b60a80" (UID: "30f56441-0814-409d-98c2-f51795b60a80"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.355170 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-scripts" (OuterVolumeSpecName: "scripts") pod "23e8824d-9b92-4724-971e-c807d48d8229" (UID: "23e8824d-9b92-4724-971e-c807d48d8229"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.355370 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/30f56441-0814-409d-98c2-f51795b60a80-horizon-secret-key\") pod \"30f56441-0814-409d-98c2-f51795b60a80\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.355842 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz8jg\" (UniqueName: \"kubernetes.io/projected/30f56441-0814-409d-98c2-f51795b60a80-kube-api-access-xz8jg\") pod \"30f56441-0814-409d-98c2-f51795b60a80\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.356472 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e8824d-9b92-4724-971e-c807d48d8229-logs\") pod \"23e8824d-9b92-4724-971e-c807d48d8229\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.356619 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-scripts\") pod \"90fa85b0-3a62-4df2-835f-ca176c602f7b\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.356854 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e8824d-9b92-4724-971e-c807d48d8229-logs" (OuterVolumeSpecName: "logs") pod "23e8824d-9b92-4724-971e-c807d48d8229" (UID: "23e8824d-9b92-4724-971e-c807d48d8229"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.356959 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-config-data\") pod \"90fa85b0-3a62-4df2-835f-ca176c602f7b\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.357261 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-scripts" (OuterVolumeSpecName: "scripts") pod "90fa85b0-3a62-4df2-835f-ca176c602f7b" (UID: "90fa85b0-3a62-4df2-835f-ca176c602f7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.357730 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-config-data" (OuterVolumeSpecName: "config-data") pod "90fa85b0-3a62-4df2-835f-ca176c602f7b" (UID: "90fa85b0-3a62-4df2-835f-ca176c602f7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.357751 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23e8824d-9b92-4724-971e-c807d48d8229-horizon-secret-key\") pod \"23e8824d-9b92-4724-971e-c807d48d8229\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.358038 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fa85b0-3a62-4df2-835f-ca176c602f7b-logs\") pod \"90fa85b0-3a62-4df2-835f-ca176c602f7b\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.358172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-scripts\") pod \"30f56441-0814-409d-98c2-f51795b60a80\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.358321 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7n6m\" (UniqueName: \"kubernetes.io/projected/23e8824d-9b92-4724-971e-c807d48d8229-kube-api-access-x7n6m\") pod \"23e8824d-9b92-4724-971e-c807d48d8229\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.358622 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fa85b0-3a62-4df2-835f-ca176c602f7b-logs" (OuterVolumeSpecName: "logs") pod "90fa85b0-3a62-4df2-835f-ca176c602f7b" (UID: "90fa85b0-3a62-4df2-835f-ca176c602f7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.358957 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f56441-0814-409d-98c2-f51795b60a80-kube-api-access-xz8jg" (OuterVolumeSpecName: "kube-api-access-xz8jg") pod "30f56441-0814-409d-98c2-f51795b60a80" (UID: "30f56441-0814-409d-98c2-f51795b60a80"). InnerVolumeSpecName "kube-api-access-xz8jg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.359033 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-scripts" (OuterVolumeSpecName: "scripts") pod "30f56441-0814-409d-98c2-f51795b60a80" (UID: "30f56441-0814-409d-98c2-f51795b60a80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.359653 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90fa85b0-3a62-4df2-835f-ca176c602f7b-horizon-secret-key\") pod \"90fa85b0-3a62-4df2-835f-ca176c602f7b\" (UID: \"90fa85b0-3a62-4df2-835f-ca176c602f7b\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.359924 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-config-data\") pod \"23e8824d-9b92-4724-971e-c807d48d8229\" (UID: \"23e8824d-9b92-4724-971e-c807d48d8229\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.360070 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-config-data\") pod \"30f56441-0814-409d-98c2-f51795b60a80\" (UID: \"30f56441-0814-409d-98c2-f51795b60a80\") "
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.360649 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-config-data" (OuterVolumeSpecName: "config-data") pod "30f56441-0814-409d-98c2-f51795b60a80" (UID: "30f56441-0814-409d-98c2-f51795b60a80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.360781 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-config-data" (OuterVolumeSpecName: "config-data") pod "23e8824d-9b92-4724-971e-c807d48d8229" (UID: "23e8824d-9b92-4724-971e-c807d48d8229"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.360968 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fa85b0-3a62-4df2-835f-ca176c602f7b-kube-api-access-cr8zp" (OuterVolumeSpecName: "kube-api-access-cr8zp") pod "90fa85b0-3a62-4df2-835f-ca176c602f7b" (UID: "90fa85b0-3a62-4df2-835f-ca176c602f7b"). InnerVolumeSpecName "kube-api-access-cr8zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.361420 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-httpd-config\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.361620 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-config\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.361860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8ncg\" (UniqueName: \"kubernetes.io/projected/c732e620-9ed0-4246-93ca-c71277029344-kube-api-access-s8ncg\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.362327 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-combined-ca-bundle\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.362480 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-ovndb-tls-certs\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt"
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.364658 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f56441-0814-409d-98c2-f51795b60a80-logs\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.364797 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365066 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr8zp\" (UniqueName: \"kubernetes.io/projected/90fa85b0-3a62-4df2-835f-ca176c602f7b-kube-api-access-cr8zp\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365242 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz8jg\" (UniqueName: \"kubernetes.io/projected/30f56441-0814-409d-98c2-f51795b60a80-kube-api-access-xz8jg\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365339 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e8824d-9b92-4724-971e-c807d48d8229-logs\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365446 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365553 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90fa85b0-3a62-4df2-835f-ca176c602f7b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365672 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fa85b0-3a62-4df2-835f-ca176c602f7b-logs\")
on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365776 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365895 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e8824d-9b92-4724-971e-c807d48d8229-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.365994 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30f56441-0814-409d-98c2-f51795b60a80-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.364983 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f56441-0814-409d-98c2-f51795b60a80-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "30f56441-0814-409d-98c2-f51795b60a80" (UID: "30f56441-0814-409d-98c2-f51795b60a80"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.364988 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fa85b0-3a62-4df2-835f-ca176c602f7b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "90fa85b0-3a62-4df2-835f-ca176c602f7b" (UID: "90fa85b0-3a62-4df2-835f-ca176c602f7b"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.368047 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e8824d-9b92-4724-971e-c807d48d8229-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "23e8824d-9b92-4724-971e-c807d48d8229" (UID: "23e8824d-9b92-4724-971e-c807d48d8229"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.374424 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-ovndb-tls-certs\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.374604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-config\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.375419 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-httpd-config\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.377421 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-combined-ca-bundle\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:17 crc 
kubenswrapper[4768]: I0217 13:55:17.382765 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8ncg\" (UniqueName: \"kubernetes.io/projected/c732e620-9ed0-4246-93ca-c71277029344-kube-api-access-s8ncg\") pod \"neutron-b89745fbd-lcjtt\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.383358 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e8824d-9b92-4724-971e-c807d48d8229-kube-api-access-x7n6m" (OuterVolumeSpecName: "kube-api-access-x7n6m") pod "23e8824d-9b92-4724-971e-c807d48d8229" (UID: "23e8824d-9b92-4724-971e-c807d48d8229"). InnerVolumeSpecName "kube-api-access-x7n6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.467301 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23e8824d-9b92-4724-971e-c807d48d8229-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.467338 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7n6m\" (UniqueName: \"kubernetes.io/projected/23e8824d-9b92-4724-971e-c807d48d8229-kube-api-access-x7n6m\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.467351 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90fa85b0-3a62-4df2-835f-ca176c602f7b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.467363 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/30f56441-0814-409d-98c2-f51795b60a80-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.543524 4768 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" path="/var/lib/kubelet/pods/e0ab0a41-18cb-4ed9-9c99-009e71c184f6/volumes" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.580766 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.617844 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55d496646f-4tq7c" event={"ID":"23e8824d-9b92-4724-971e-c807d48d8229","Type":"ContainerDied","Data":"30a9df22abc456a7e1bc28304fb6ef77802d88c961a23692d448f2e131af72fb"} Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.617890 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55d496646f-4tq7c" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.624426 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c5b78857-sdk9f" event={"ID":"30f56441-0814-409d-98c2-f51795b60a80","Type":"ContainerDied","Data":"df8ea5c2b2725dd5f342b485c3f416dd19a956f873112c676225d48fde0c4837"} Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.624912 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c5b78857-sdk9f" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.625724 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7645d5bbd9-t6l64" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.626015 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7645d5bbd9-t6l64" event={"ID":"90fa85b0-3a62-4df2-835f-ca176c602f7b","Type":"ContainerDied","Data":"eb0f353174c91cb882e7c2a9f0100ad0cf029a76b1bf9e5c74f8fd3ad7a717ec"} Feb 17 13:55:17 crc kubenswrapper[4768]: E0217 13:55:17.627121 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5c8md" podUID="df7e53d7-b63b-41b4-b909-c6effd0dab0c" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.668759 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55d496646f-4tq7c"] Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.669244 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55d496646f-4tq7c"] Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.724369 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66c5b78857-sdk9f"] Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.734604 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66c5b78857-sdk9f"] Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.770488 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7645d5bbd9-t6l64"] Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.776818 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7645d5bbd9-t6l64"] Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.920278 4768 scope.go:117] "RemoveContainer" containerID="08cc5bc6f875fb0b38b26674a4928dd368487de9487bba4965941f52ab0edd2c" Feb 17 13:55:17 crc kubenswrapper[4768]: I0217 13:55:17.985194 4768 scope.go:117] 
"RemoveContainer" containerID="87d58f42c17bfa2aa8f87897bc651d29bae5ab1acae7c5e997259fb061e0bc27" Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.031742 4768 scope.go:117] "RemoveContainer" containerID="2e9636a2cc437556f953304bf94ccb0842ded9aa2e1cd892196e5f4f432ad6ce" Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.415545 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6584d79658-wtxrc"] Feb 17 13:55:18 crc kubenswrapper[4768]: W0217 13:55:18.547964 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1acea03_8a67_474d_a6b1_803ea949a747.slice/crio-13c9ea902cdee4196e4faaf35fe4445469372e0238e9f42189afb39cd41b8042 WatchSource:0}: Error finding container 13c9ea902cdee4196e4faaf35fe4445469372e0238e9f42189afb39cd41b8042: Status 404 returned error can't find the container with id 13c9ea902cdee4196e4faaf35fe4445469372e0238e9f42189afb39cd41b8042 Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.555019 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-78zqh"] Feb 17 13:55:18 crc kubenswrapper[4768]: W0217 13:55:18.576365 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice/crio-1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5 WatchSource:0}: Error finding container 1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5: Status 404 returned error can't find the container with id 1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5 Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.582547 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-684746c5d4-6lxfv"] Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.636994 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerStarted","Data":"1076244baedb5276f95f29bd05ad24800133851d3eaaca34c9a31c87bf95c679"} Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.640741 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xfpzr" event={"ID":"7ff099b6-c514-40c8-aa19-370d7f8dfbaf","Type":"ContainerStarted","Data":"817654ff26d17a41b821e13d9494c615148a42d040ef50715af28a50f1a3360a"} Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.643644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6584d79658-wtxrc" event={"ID":"331a37d3-96b1-4065-9941-25acc64cc6c1","Type":"ContainerStarted","Data":"6546c8c8ac5ce7b6afa2b8e754bd8bb2b4d83e54ed941093319ff1ab5b6d6600"} Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.645724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684746c5d4-6lxfv" event={"ID":"c20ad4a2-cf3e-4390-9141-1cc58518fd2b","Type":"ContainerStarted","Data":"1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5"} Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.650057 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78zqh" event={"ID":"e1acea03-8a67-474d-a6b1-803ea949a747","Type":"ContainerStarted","Data":"13c9ea902cdee4196e4faaf35fe4445469372e0238e9f42189afb39cd41b8042"} Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.670158 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xfpzr" podStartSLOduration=4.705897246 podStartE2EDuration="32.67012642s" podCreationTimestamp="2026-02-17 13:54:46 +0000 UTC" firstStartedPulling="2026-02-17 13:54:47.76792367 +0000 UTC m=+1107.047310112" lastFinishedPulling="2026-02-17 13:55:15.732152844 +0000 UTC m=+1135.011539286" observedRunningTime="2026-02-17 13:55:18.65787396 +0000 UTC m=+1137.937260402" watchObservedRunningTime="2026-02-17 13:55:18.67012642 +0000 UTC 
m=+1137.949512872" Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.734500 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-x5846"] Feb 17 13:55:18 crc kubenswrapper[4768]: W0217 13:55:18.737771 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce4d08f_4b2c_4831_acce_546ddff7277a.slice/crio-1b70669dbe1d78fe68e25e287f4ffd999ee11277612a4732005a3526475d34c3 WatchSource:0}: Error finding container 1b70669dbe1d78fe68e25e287f4ffd999ee11277612a4732005a3526475d34c3: Status 404 returned error can't find the container with id 1b70669dbe1d78fe68e25e287f4ffd999ee11277612a4732005a3526475d34c3 Feb 17 13:55:18 crc kubenswrapper[4768]: I0217 13:55:18.816017 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:55:18 crc kubenswrapper[4768]: W0217 13:55:18.816483 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6e3afdd_2e51_4f0a_9208_5784a5900c96.slice/crio-4b2868befb5600dd322335a286ed6bbfe75a41e04c110c758f1a5a3601b13e50 WatchSource:0}: Error finding container 4b2868befb5600dd322335a286ed6bbfe75a41e04c110c758f1a5a3601b13e50: Status 404 returned error can't find the container with id 4b2868befb5600dd322335a286ed6bbfe75a41e04c110c758f1a5a3601b13e50 Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.454606 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-686d5745f-p8vdx"] Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.457079 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.462741 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.463004 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.491574 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-686d5745f-p8vdx"] Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.516237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-combined-ca-bundle\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.516662 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-public-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.516732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-ovndb-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.516820 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdx7f\" (UniqueName: 
\"kubernetes.io/projected/2de38494-6385-477a-9ec8-2383ad286611-kube-api-access-rdx7f\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.516907 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-config\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.517040 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-internal-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.517091 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-httpd-config\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.550522 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e8824d-9b92-4724-971e-c807d48d8229" path="/var/lib/kubelet/pods/23e8824d-9b92-4724-971e-c807d48d8229/volumes" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.551493 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f56441-0814-409d-98c2-f51795b60a80" path="/var/lib/kubelet/pods/30f56441-0814-409d-98c2-f51795b60a80/volumes" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.552290 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="90fa85b0-3a62-4df2-835f-ca176c602f7b" path="/var/lib/kubelet/pods/90fa85b0-3a62-4df2-835f-ca176c602f7b/volumes" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.581670 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6pghj" podUID="e0ab0a41-18cb-4ed9-9c99-009e71c184f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.631420 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-public-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.631536 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-ovndb-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.631584 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdx7f\" (UniqueName: \"kubernetes.io/projected/2de38494-6385-477a-9ec8-2383ad286611-kube-api-access-rdx7f\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.631634 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-config\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: 
I0217 13:55:19.631677 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-internal-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.631700 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-httpd-config\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.631730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-combined-ca-bundle\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.636547 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-combined-ca-bundle\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.636733 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-ovndb-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.636999 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-httpd-config\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.637352 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-public-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.638027 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-internal-tls-certs\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.639074 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-config\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.660006 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdx7f\" (UniqueName: \"kubernetes.io/projected/2de38494-6385-477a-9ec8-2383ad286611-kube-api-access-rdx7f\") pod \"neutron-686d5745f-p8vdx\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") " pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.664042 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a6e3afdd-2e51-4f0a-9208-5784a5900c96","Type":"ContainerStarted","Data":"4b2868befb5600dd322335a286ed6bbfe75a41e04c110c758f1a5a3601b13e50"} Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.665302 4768 generic.go:334] "Generic (PLEG): container finished" podID="9ce4d08f-4b2c-4831-acce-546ddff7277a" containerID="2b489a10eb1f6fd897082fc1b09044c15df00c13e18d839bde03c47b74d55153" exitCode=0 Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.665380 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-x5846" event={"ID":"9ce4d08f-4b2c-4831-acce-546ddff7277a","Type":"ContainerDied","Data":"2b489a10eb1f6fd897082fc1b09044c15df00c13e18d839bde03c47b74d55153"} Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.665411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-x5846" event={"ID":"9ce4d08f-4b2c-4831-acce-546ddff7277a","Type":"ContainerStarted","Data":"1b70669dbe1d78fe68e25e287f4ffd999ee11277612a4732005a3526475d34c3"} Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.671748 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78zqh" event={"ID":"e1acea03-8a67-474d-a6b1-803ea949a747","Type":"ContainerStarted","Data":"ce75f57dc3df8a873079f9c8b07d66c9fdac75ed9895fb9cdad8d31fc27e241b"} Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.677255 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6584d79658-wtxrc" event={"ID":"331a37d3-96b1-4065-9941-25acc64cc6c1","Type":"ContainerStarted","Data":"b818c78244c4b3b8cd025ae7e4de3e22225ac85dc7488b8f37eb092ba8165b91"} Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.715937 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-78zqh" podStartSLOduration=18.7159169 podStartE2EDuration="18.7159169s" podCreationTimestamp="2026-02-17 13:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:19.708625788 +0000 UTC m=+1138.988012230" watchObservedRunningTime="2026-02-17 13:55:19.7159169 +0000 UTC m=+1138.995303342" Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.752217 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b89745fbd-lcjtt"] Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.813616 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:19 crc kubenswrapper[4768]: W0217 13:55:19.897689 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4f4a20b_94a7_4b16_bdca_a99d9440b74e.slice/crio-782f8a56a0f86725dcff987d3ee4f41f2f733efcb258de20c7849e679c619a1f WatchSource:0}: Error finding container 782f8a56a0f86725dcff987d3ee4f41f2f733efcb258de20c7849e679c619a1f: Status 404 returned error can't find the container with id 782f8a56a0f86725dcff987d3ee4f41f2f733efcb258de20c7849e679c619a1f Feb 17 13:55:19 crc kubenswrapper[4768]: I0217 13:55:19.899857 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.567344 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-686d5745f-p8vdx"] Feb 17 13:55:20 crc kubenswrapper[4768]: W0217 13:55:20.653042 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2de38494_6385_477a_9ec8_2383ad286611.slice/crio-6f92d699f2bc25182ef672b461cd4434c814f8d9f57ce67b159fb40da99ec8ed WatchSource:0}: Error finding container 6f92d699f2bc25182ef672b461cd4434c814f8d9f57ce67b159fb40da99ec8ed: Status 404 returned error can't find the container with id 6f92d699f2bc25182ef672b461cd4434c814f8d9f57ce67b159fb40da99ec8ed Feb 17 13:55:20 crc 
kubenswrapper[4768]: I0217 13:55:20.692542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684746c5d4-6lxfv" event={"ID":"c20ad4a2-cf3e-4390-9141-1cc58518fd2b","Type":"ContainerStarted","Data":"4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.692585 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684746c5d4-6lxfv" event={"ID":"c20ad4a2-cf3e-4390-9141-1cc58518fd2b","Type":"ContainerStarted","Data":"90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.703594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4f4a20b-94a7-4b16-bdca-a99d9440b74e","Type":"ContainerStarted","Data":"782f8a56a0f86725dcff987d3ee4f41f2f733efcb258de20c7849e679c619a1f"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.705442 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6e3afdd-2e51-4f0a-9208-5784a5900c96","Type":"ContainerStarted","Data":"203ea60ac8fe35f253b6cd1d649e5f9ff820300b5239154f25461f6868be29a3"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.720543 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-x5846" event={"ID":"9ce4d08f-4b2c-4831-acce-546ddff7277a","Type":"ContainerStarted","Data":"006722482ae4b4fe8cbbb77abdaf4c58cb033909d2a1b6b5a0cdcb756fd45af2"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.720616 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-x5846" Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.739301 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686d5745f-p8vdx" 
event={"ID":"2de38494-6385-477a-9ec8-2383ad286611","Type":"ContainerStarted","Data":"6f92d699f2bc25182ef672b461cd4434c814f8d9f57ce67b159fb40da99ec8ed"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.748859 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6584d79658-wtxrc" event={"ID":"331a37d3-96b1-4065-9941-25acc64cc6c1","Type":"ContainerStarted","Data":"57455e8c07ed58349781fe032b9639f155c394fb14614ace4df357d2d76404e2"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.753484 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-684746c5d4-6lxfv" podStartSLOduration=25.07797092 podStartE2EDuration="25.753467542s" podCreationTimestamp="2026-02-17 13:54:55 +0000 UTC" firstStartedPulling="2026-02-17 13:55:18.594641939 +0000 UTC m=+1137.874028381" lastFinishedPulling="2026-02-17 13:55:19.270138561 +0000 UTC m=+1138.549525003" observedRunningTime="2026-02-17 13:55:20.717522586 +0000 UTC m=+1139.996909028" watchObservedRunningTime="2026-02-17 13:55:20.753467542 +0000 UTC m=+1140.032853984" Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.757823 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-x5846" podStartSLOduration=4.757806162 podStartE2EDuration="4.757806162s" podCreationTimestamp="2026-02-17 13:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:20.755317823 +0000 UTC m=+1140.034704265" watchObservedRunningTime="2026-02-17 13:55:20.757806162 +0000 UTC m=+1140.037192604" Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.770320 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b89745fbd-lcjtt" event={"ID":"c732e620-9ed0-4246-93ca-c71277029344","Type":"ContainerStarted","Data":"52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c"} Feb 17 13:55:20 crc 
kubenswrapper[4768]: I0217 13:55:20.770361 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b89745fbd-lcjtt" event={"ID":"c732e620-9ed0-4246-93ca-c71277029344","Type":"ContainerStarted","Data":"9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.770374 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b89745fbd-lcjtt" event={"ID":"c732e620-9ed0-4246-93ca-c71277029344","Type":"ContainerStarted","Data":"ace3362ed07f5e0539ea6dc60aee1c83920d3560c685450b37450b8a59c0cb07"} Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.770502 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.776374 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6584d79658-wtxrc" podStartSLOduration=25.241358665 podStartE2EDuration="25.776361196s" podCreationTimestamp="2026-02-17 13:54:55 +0000 UTC" firstStartedPulling="2026-02-17 13:55:18.426394108 +0000 UTC m=+1137.705780540" lastFinishedPulling="2026-02-17 13:55:18.961396629 +0000 UTC m=+1138.240783071" observedRunningTime="2026-02-17 13:55:20.774631658 +0000 UTC m=+1140.054018100" watchObservedRunningTime="2026-02-17 13:55:20.776361196 +0000 UTC m=+1140.055747628" Feb 17 13:55:20 crc kubenswrapper[4768]: I0217 13:55:20.811940 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b89745fbd-lcjtt" podStartSLOduration=3.811869999 podStartE2EDuration="3.811869999s" podCreationTimestamp="2026-02-17 13:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:20.796462773 +0000 UTC m=+1140.075849215" watchObservedRunningTime="2026-02-17 13:55:20.811869999 +0000 UTC m=+1140.091256441" Feb 17 13:55:21 crc 
kubenswrapper[4768]: I0217 13:55:21.782410 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686d5745f-p8vdx" event={"ID":"2de38494-6385-477a-9ec8-2383ad286611","Type":"ContainerStarted","Data":"fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777"} Feb 17 13:55:21 crc kubenswrapper[4768]: I0217 13:55:21.784433 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4f4a20b-94a7-4b16-bdca-a99d9440b74e","Type":"ContainerStarted","Data":"04b3f00812431f3d6f90de59a55a97520898f4deb1a854d9f19c74c18d6d94cc"} Feb 17 13:55:21 crc kubenswrapper[4768]: I0217 13:55:21.786949 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6e3afdd-2e51-4f0a-9208-5784a5900c96","Type":"ContainerStarted","Data":"451132f07075a65508d8072b3c4ecbb82a0ca3b3da706b6771e4c4b4fc56d7a5"} Feb 17 13:55:21 crc kubenswrapper[4768]: I0217 13:55:21.819803 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.819748589 podStartE2EDuration="21.819748589s" podCreationTimestamp="2026-02-17 13:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:21.805894415 +0000 UTC m=+1141.085280867" watchObservedRunningTime="2026-02-17 13:55:21.819748589 +0000 UTC m=+1141.099135051" Feb 17 13:55:22 crc kubenswrapper[4768]: I0217 13:55:22.815148 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686d5745f-p8vdx" event={"ID":"2de38494-6385-477a-9ec8-2383ad286611","Type":"ContainerStarted","Data":"bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3"} Feb 17 13:55:22 crc kubenswrapper[4768]: I0217 13:55:22.815449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-686d5745f-p8vdx" Feb 17 13:55:22 crc 
kubenswrapper[4768]: I0217 13:55:22.820967 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4f4a20b-94a7-4b16-bdca-a99d9440b74e","Type":"ContainerStarted","Data":"1d4fbbf46a63e14d15177df0a165a2b20ece73dc6eeb9563ca49f2bda1edd532"} Feb 17 13:55:22 crc kubenswrapper[4768]: I0217 13:55:22.821042 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-log" containerID="cri-o://04b3f00812431f3d6f90de59a55a97520898f4deb1a854d9f19c74c18d6d94cc" gracePeriod=30 Feb 17 13:55:22 crc kubenswrapper[4768]: I0217 13:55:22.821064 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-httpd" containerID="cri-o://1d4fbbf46a63e14d15177df0a165a2b20ece73dc6eeb9563ca49f2bda1edd532" gracePeriod=30 Feb 17 13:55:22 crc kubenswrapper[4768]: I0217 13:55:22.844458 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-686d5745f-p8vdx" podStartSLOduration=3.844431635 podStartE2EDuration="3.844431635s" podCreationTimestamp="2026-02-17 13:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:22.841670838 +0000 UTC m=+1142.121057280" watchObservedRunningTime="2026-02-17 13:55:22.844431635 +0000 UTC m=+1142.123818077" Feb 17 13:55:22 crc kubenswrapper[4768]: I0217 13:55:22.873513 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.87349704 podStartE2EDuration="29.87349704s" podCreationTimestamp="2026-02-17 13:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 13:55:22.872073121 +0000 UTC m=+1142.151459563" watchObservedRunningTime="2026-02-17 13:55:22.87349704 +0000 UTC m=+1142.152883482" Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.560311 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.560619 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.830747 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ff099b6-c514-40c8-aa19-370d7f8dfbaf" containerID="817654ff26d17a41b821e13d9494c615148a42d040ef50715af28a50f1a3360a" exitCode=0 Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.830803 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xfpzr" event={"ID":"7ff099b6-c514-40c8-aa19-370d7f8dfbaf","Type":"ContainerDied","Data":"817654ff26d17a41b821e13d9494c615148a42d040ef50715af28a50f1a3360a"} Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.834552 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerID="1d4fbbf46a63e14d15177df0a165a2b20ece73dc6eeb9563ca49f2bda1edd532" exitCode=0 Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.834579 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerID="04b3f00812431f3d6f90de59a55a97520898f4deb1a854d9f19c74c18d6d94cc" exitCode=143 Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.834620 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4f4a20b-94a7-4b16-bdca-a99d9440b74e","Type":"ContainerDied","Data":"1d4fbbf46a63e14d15177df0a165a2b20ece73dc6eeb9563ca49f2bda1edd532"} Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.834645 4768 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4f4a20b-94a7-4b16-bdca-a99d9440b74e","Type":"ContainerDied","Data":"04b3f00812431f3d6f90de59a55a97520898f4deb1a854d9f19c74c18d6d94cc"} Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.836562 4768 generic.go:334] "Generic (PLEG): container finished" podID="e1acea03-8a67-474d-a6b1-803ea949a747" containerID="ce75f57dc3df8a873079f9c8b07d66c9fdac75ed9895fb9cdad8d31fc27e241b" exitCode=0 Feb 17 13:55:23 crc kubenswrapper[4768]: I0217 13:55:23.837330 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78zqh" event={"ID":"e1acea03-8a67-474d-a6b1-803ea949a747","Type":"ContainerDied","Data":"ce75f57dc3df8a873079f9c8b07d66c9fdac75ed9895fb9cdad8d31fc27e241b"} Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.023351 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.023886 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.133755 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.135184 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6584d79658-wtxrc" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.632188 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.645927 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xfpzr" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794022 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-credential-keys\") pod \"e1acea03-8a67-474d-a6b1-803ea949a747\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794059 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfcfp\" (UniqueName: \"kubernetes.io/projected/e1acea03-8a67-474d-a6b1-803ea949a747-kube-api-access-gfcfp\") pod \"e1acea03-8a67-474d-a6b1-803ea949a747\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqgvw\" (UniqueName: \"kubernetes.io/projected/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-kube-api-access-bqgvw\") pod \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794197 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-combined-ca-bundle\") pod \"e1acea03-8a67-474d-a6b1-803ea949a747\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794223 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-fernet-keys\") pod \"e1acea03-8a67-474d-a6b1-803ea949a747\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794273 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-scripts\") pod \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794290 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-combined-ca-bundle\") pod \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794305 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-config-data\") pod \"e1acea03-8a67-474d-a6b1-803ea949a747\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794354 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-logs\") pod \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794385 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-scripts\") pod \"e1acea03-8a67-474d-a6b1-803ea949a747\" (UID: \"e1acea03-8a67-474d-a6b1-803ea949a747\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.794411 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-config-data\") pod \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\" (UID: \"7ff099b6-c514-40c8-aa19-370d7f8dfbaf\") " Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.796885 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-logs" (OuterVolumeSpecName: "logs") pod "7ff099b6-c514-40c8-aa19-370d7f8dfbaf" (UID: "7ff099b6-c514-40c8-aa19-370d7f8dfbaf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.802212 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e1acea03-8a67-474d-a6b1-803ea949a747" (UID: "e1acea03-8a67-474d-a6b1-803ea949a747"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.806730 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-scripts" (OuterVolumeSpecName: "scripts") pod "7ff099b6-c514-40c8-aa19-370d7f8dfbaf" (UID: "7ff099b6-c514-40c8-aa19-370d7f8dfbaf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.808826 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-scripts" (OuterVolumeSpecName: "scripts") pod "e1acea03-8a67-474d-a6b1-803ea949a747" (UID: "e1acea03-8a67-474d-a6b1-803ea949a747"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.816982 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e1acea03-8a67-474d-a6b1-803ea949a747" (UID: "e1acea03-8a67-474d-a6b1-803ea949a747"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.819396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-kube-api-access-bqgvw" (OuterVolumeSpecName: "kube-api-access-bqgvw") pod "7ff099b6-c514-40c8-aa19-370d7f8dfbaf" (UID: "7ff099b6-c514-40c8-aa19-370d7f8dfbaf"). InnerVolumeSpecName "kube-api-access-bqgvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.824731 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1acea03-8a67-474d-a6b1-803ea949a747-kube-api-access-gfcfp" (OuterVolumeSpecName: "kube-api-access-gfcfp") pod "e1acea03-8a67-474d-a6b1-803ea949a747" (UID: "e1acea03-8a67-474d-a6b1-803ea949a747"). InnerVolumeSpecName "kube-api-access-gfcfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.842047 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-config-data" (OuterVolumeSpecName: "config-data") pod "7ff099b6-c514-40c8-aa19-370d7f8dfbaf" (UID: "7ff099b6-c514-40c8-aa19-370d7f8dfbaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.871829 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ff099b6-c514-40c8-aa19-370d7f8dfbaf" (UID: "7ff099b6-c514-40c8-aa19-370d7f8dfbaf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.873751 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xfpzr" event={"ID":"7ff099b6-c514-40c8-aa19-370d7f8dfbaf","Type":"ContainerDied","Data":"604d11ec5eb14e3b30e1c76a35ca9f2a356ea5b1f90986a96134dd99b940906d"} Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.873800 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604d11ec5eb14e3b30e1c76a35ca9f2a356ea5b1f90986a96134dd99b940906d" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.873856 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xfpzr" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.876301 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78zqh" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.876264 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78zqh" event={"ID":"e1acea03-8a67-474d-a6b1-803ea949a747","Type":"ContainerDied","Data":"13c9ea902cdee4196e4faaf35fe4445469372e0238e9f42189afb39cd41b8042"} Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.876575 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c9ea902cdee4196e4faaf35fe4445469372e0238e9f42189afb39cd41b8042" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.888028 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-config-data" (OuterVolumeSpecName: "config-data") pod "e1acea03-8a67-474d-a6b1-803ea949a747" (UID: "e1acea03-8a67-474d-a6b1-803ea949a747"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897432 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897468 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897480 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897492 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897506 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfcfp\" (UniqueName: \"kubernetes.io/projected/e1acea03-8a67-474d-a6b1-803ea949a747-kube-api-access-gfcfp\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897521 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqgvw\" (UniqueName: \"kubernetes.io/projected/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-kube-api-access-bqgvw\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897532 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897543 4768 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897553 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.897564 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff099b6-c514-40c8-aa19-370d7f8dfbaf-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.917347 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1acea03-8a67-474d-a6b1-803ea949a747" (UID: "e1acea03-8a67-474d-a6b1-803ea949a747"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:26 crc kubenswrapper[4768]: I0217 13:55:26.998978 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acea03-8a67-474d-a6b1-803ea949a747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.021309 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.201741 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-httpd-run\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.201879 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-public-tls-certs\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.201914 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-scripts\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.201951 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.202185 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.202518 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-logs\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.202584 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-combined-ca-bundle\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.202650 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-config-data\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.202745 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncnxk\" (UniqueName: \"kubernetes.io/projected/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-kube-api-access-ncnxk\") pod \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\" (UID: \"b4f4a20b-94a7-4b16-bdca-a99d9440b74e\") " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.203218 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.206900 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-logs" (OuterVolumeSpecName: "logs") pod 
"b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.210249 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-scripts" (OuterVolumeSpecName: "scripts") pod "b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.210315 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-kube-api-access-ncnxk" (OuterVolumeSpecName: "kube-api-access-ncnxk") pod "b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "kube-api-access-ncnxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.211449 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.227711 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.265178 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-config-data" (OuterVolumeSpecName: "config-data") pod "b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.265204 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4f4a20b-94a7-4b16-bdca-a99d9440b74e" (UID: "b4f4a20b-94a7-4b16-bdca-a99d9440b74e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.304546 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.304593 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.304611 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.304623 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncnxk\" (UniqueName: \"kubernetes.io/projected/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-kube-api-access-ncnxk\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 
crc kubenswrapper[4768]: I0217 13:55:27.304636 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.304647 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4f4a20b-94a7-4b16-bdca-a99d9440b74e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.304690 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.331668 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.335492 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-x5846" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.388775 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-98n58"] Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.389437 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" podUID="ff892b26-a158-4942-85e5-6a657ffe4d4d" containerName="dnsmasq-dns" containerID="cri-o://7542cb62164edc829a2341e05c3597d2591cf13f9fc3242f4775fab96d07162d" gracePeriod=10 Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.407232 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.773072 
4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-77c78fc8c5-fgk9h"] Feb 17 13:55:27 crc kubenswrapper[4768]: E0217 13:55:27.773864 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-log" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.773887 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-log" Feb 17 13:55:27 crc kubenswrapper[4768]: E0217 13:55:27.773907 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1acea03-8a67-474d-a6b1-803ea949a747" containerName="keystone-bootstrap" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.773917 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1acea03-8a67-474d-a6b1-803ea949a747" containerName="keystone-bootstrap" Feb 17 13:55:27 crc kubenswrapper[4768]: E0217 13:55:27.773940 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-httpd" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.773948 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-httpd" Feb 17 13:55:27 crc kubenswrapper[4768]: E0217 13:55:27.773961 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff099b6-c514-40c8-aa19-370d7f8dfbaf" containerName="placement-db-sync" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.773969 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff099b6-c514-40c8-aa19-370d7f8dfbaf" containerName="placement-db-sync" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.774272 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff099b6-c514-40c8-aa19-370d7f8dfbaf" containerName="placement-db-sync" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.774313 4768 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-log" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.774327 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" containerName="glance-httpd" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.774355 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1acea03-8a67-474d-a6b1-803ea949a747" containerName="keystone-bootstrap" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.775141 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.778645 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.778836 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.778967 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xwvsr" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.779120 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.779318 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.779549 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.805532 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c78fc8c5-fgk9h"] Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.846573 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c85df6c44-rr84t"] Feb 17 13:55:27 
crc kubenswrapper[4768]: I0217 13:55:27.848492 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.854753 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.855182 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.855429 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.855645 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mbkz8" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.855849 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.900014 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c85df6c44-rr84t"] Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.923534 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbxbs\" (UniqueName: \"kubernetes.io/projected/f8201b1d-afab-4fc2-bde1-bad212359f0a-kube-api-access-tbxbs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.923647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-credential-keys\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 
17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.923681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-internal-tls-certs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.923745 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-scripts\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.923788 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-fernet-keys\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.923910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-config-data\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.923994 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-combined-ca-bundle\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 
crc kubenswrapper[4768]: I0217 13:55:27.924075 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-public-tls-certs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.932613 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b4f4a20b-94a7-4b16-bdca-a99d9440b74e","Type":"ContainerDied","Data":"782f8a56a0f86725dcff987d3ee4f41f2f733efcb258de20c7849e679c619a1f"} Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.932792 4768 scope.go:117] "RemoveContainer" containerID="1d4fbbf46a63e14d15177df0a165a2b20ece73dc6eeb9563ca49f2bda1edd532" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.933052 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.946258 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerStarted","Data":"ccd2128ae128dcd19682ea32114c0e6bfb94fa980bae79579791788a93fd9111"} Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.966178 4768 generic.go:334] "Generic (PLEG): container finished" podID="ff892b26-a158-4942-85e5-6a657ffe4d4d" containerID="7542cb62164edc829a2341e05c3597d2591cf13f9fc3242f4775fab96d07162d" exitCode=0 Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.966234 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" event={"ID":"ff892b26-a158-4942-85e5-6a657ffe4d4d","Type":"ContainerDied","Data":"7542cb62164edc829a2341e05c3597d2591cf13f9fc3242f4775fab96d07162d"} Feb 17 13:55:27 crc kubenswrapper[4768]: I0217 13:55:27.999440 
4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.019786 4768 scope.go:117] "RemoveContainer" containerID="04b3f00812431f3d6f90de59a55a97520898f4deb1a854d9f19c74c18d6d94cc" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.020296 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025533 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-config-data\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-combined-ca-bundle\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025627 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-public-tls-certs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025652 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c82341-24e7-4524-82c7-996a851af418-logs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " 
pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbxbs\" (UniqueName: \"kubernetes.io/projected/f8201b1d-afab-4fc2-bde1-bad212359f0a-kube-api-access-tbxbs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-scripts\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025737 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-credential-keys\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-internal-tls-certs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025773 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-config-data\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 
13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-public-tls-certs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025825 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-internal-tls-certs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025844 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nckcb\" (UniqueName: \"kubernetes.io/projected/55c82341-24e7-4524-82c7-996a851af418-kube-api-access-nckcb\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-scripts\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025895 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-combined-ca-bundle\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" 
Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.025921 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-fernet-keys\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.032909 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-internal-tls-certs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.033637 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-fernet-keys\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.034195 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-credential-keys\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.042438 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-config-data\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.043948 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-public-tls-certs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.048033 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-scripts\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.049707 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8201b1d-afab-4fc2-bde1-bad212359f0a-combined-ca-bundle\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.050883 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.054121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbxbs\" (UniqueName: \"kubernetes.io/projected/f8201b1d-afab-4fc2-bde1-bad212359f0a-kube-api-access-tbxbs\") pod \"keystone-77c78fc8c5-fgk9h\" (UID: \"f8201b1d-afab-4fc2-bde1-bad212359f0a\") " pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.066633 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:55:28 crc kubenswrapper[4768]: E0217 13:55:28.067165 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff892b26-a158-4942-85e5-6a657ffe4d4d" containerName="init" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.067188 4768 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ff892b26-a158-4942-85e5-6a657ffe4d4d" containerName="init" Feb 17 13:55:28 crc kubenswrapper[4768]: E0217 13:55:28.067202 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff892b26-a158-4942-85e5-6a657ffe4d4d" containerName="dnsmasq-dns" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.067212 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff892b26-a158-4942-85e5-6a657ffe4d4d" containerName="dnsmasq-dns" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.067478 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff892b26-a158-4942-85e5-6a657ffe4d4d" containerName="dnsmasq-dns" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.068393 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.068445 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.068514 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.069265 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c7bac4bbfa7a551b4bc123db2f23e406ad5c1983352def084482a277bb70005"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.069328 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://8c7bac4bbfa7a551b4bc123db2f23e406ad5c1983352def084482a277bb70005" gracePeriod=600 Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.069706 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.074326 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.074379 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.103429 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.113553 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.127609 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvsjv\" (UniqueName: \"kubernetes.io/projected/ff892b26-a158-4942-85e5-6a657ffe4d4d-kube-api-access-jvsjv\") pod \"ff892b26-a158-4942-85e5-6a657ffe4d4d\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.127716 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-svc\") pod \"ff892b26-a158-4942-85e5-6a657ffe4d4d\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.127741 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-sb\") pod \"ff892b26-a158-4942-85e5-6a657ffe4d4d\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.127761 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-swift-storage-0\") pod \"ff892b26-a158-4942-85e5-6a657ffe4d4d\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.127781 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-nb\") pod \"ff892b26-a158-4942-85e5-6a657ffe4d4d\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.127857 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-config\") pod \"ff892b26-a158-4942-85e5-6a657ffe4d4d\" (UID: \"ff892b26-a158-4942-85e5-6a657ffe4d4d\") " Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.128148 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-combined-ca-bundle\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.128232 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c82341-24e7-4524-82c7-996a851af418-logs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.128262 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-scripts\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.128297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-config-data\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.128318 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-public-tls-certs\") pod \"placement-6c85df6c44-rr84t\" (UID: 
\"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.128347 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-internal-tls-certs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.128368 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nckcb\" (UniqueName: \"kubernetes.io/projected/55c82341-24e7-4524-82c7-996a851af418-kube-api-access-nckcb\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.131838 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c82341-24e7-4524-82c7-996a851af418-logs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.140557 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-combined-ca-bundle\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.140784 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff892b26-a158-4942-85e5-6a657ffe4d4d-kube-api-access-jvsjv" (OuterVolumeSpecName: "kube-api-access-jvsjv") pod "ff892b26-a158-4942-85e5-6a657ffe4d4d" (UID: "ff892b26-a158-4942-85e5-6a657ffe4d4d"). 
InnerVolumeSpecName "kube-api-access-jvsjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.144749 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-scripts\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.147866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-internal-tls-certs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.148222 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-public-tls-certs\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.148736 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-config-data\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.150739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nckcb\" (UniqueName: \"kubernetes.io/projected/55c82341-24e7-4524-82c7-996a851af418-kube-api-access-nckcb\") pod \"placement-6c85df6c44-rr84t\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc 
kubenswrapper[4768]: I0217 13:55:28.188654 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff892b26-a158-4942-85e5-6a657ffe4d4d" (UID: "ff892b26-a158-4942-85e5-6a657ffe4d4d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.191780 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff892b26-a158-4942-85e5-6a657ffe4d4d" (UID: "ff892b26-a158-4942-85e5-6a657ffe4d4d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.193780 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.194997 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff892b26-a158-4942-85e5-6a657ffe4d4d" (UID: "ff892b26-a158-4942-85e5-6a657ffe4d4d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.200583 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff892b26-a158-4942-85e5-6a657ffe4d4d" (UID: "ff892b26-a158-4942-85e5-6a657ffe4d4d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.209835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-config" (OuterVolumeSpecName: "config") pod "ff892b26-a158-4942-85e5-6a657ffe4d4d" (UID: "ff892b26-a158-4942-85e5-6a657ffe4d4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230584 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230677 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-logs\") pod \"glance-default-external-api-0\" (UID: 
\"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230811 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230842 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7hz\" (UniqueName: \"kubernetes.io/projected/9bc3b9ad-1d13-4214-9824-af7003192ace-kube-api-access-5t7hz\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230877 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230906 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230956 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 
13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230973 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230986 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.230996 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.231006 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff892b26-a158-4942-85e5-6a657ffe4d4d-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.231016 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvsjv\" (UniqueName: \"kubernetes.io/projected/ff892b26-a158-4942-85e5-6a657ffe4d4d-kube-api-access-jvsjv\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333162 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333244 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333311 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333346 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333392 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333449 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-logs\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333469 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " 
pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333491 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7hz\" (UniqueName: \"kubernetes.io/projected/9bc3b9ad-1d13-4214-9824-af7003192ace-kube-api-access-5t7hz\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333890 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.333973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-logs\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.337532 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.339597 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-scripts\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: 
I0217 13:55:28.352692 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-config-data\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.352752 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.353258 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.359126 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7hz\" (UniqueName: \"kubernetes.io/projected/9bc3b9ad-1d13-4214-9824-af7003192ace-kube-api-access-5t7hz\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.400402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.418981 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.680683 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c78fc8c5-fgk9h"] Feb 17 13:55:28 crc kubenswrapper[4768]: W0217 13:55:28.683474 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8201b1d_afab_4fc2_bde1_bad212359f0a.slice/crio-483391b47f7f7de3ff15567f9ab59e31510388bc7dc9860b5dcdb74746047b2a WatchSource:0}: Error finding container 483391b47f7f7de3ff15567f9ab59e31510388bc7dc9860b5dcdb74746047b2a: Status 404 returned error can't find the container with id 483391b47f7f7de3ff15567f9ab59e31510388bc7dc9860b5dcdb74746047b2a Feb 17 13:55:28 crc kubenswrapper[4768]: I0217 13:55:28.773717 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c85df6c44-rr84t"] Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.000128 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" event={"ID":"ff892b26-a158-4942-85e5-6a657ffe4d4d","Type":"ContainerDied","Data":"9bb22b73f81198838da26a14d821336c98bc033857f4325f311e0c5aab723c83"} Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.000505 4768 scope.go:117] "RemoveContainer" containerID="7542cb62164edc829a2341e05c3597d2591cf13f9fc3242f4775fab96d07162d" Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.000645 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-98n58" Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.010716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c85df6c44-rr84t" event={"ID":"55c82341-24e7-4524-82c7-996a851af418","Type":"ContainerStarted","Data":"a7fe683dc88a8ba1e575fb22233a035467a95a7dfafc297fcf47b6ad63b3d340"} Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.016324 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="8c7bac4bbfa7a551b4bc123db2f23e406ad5c1983352def084482a277bb70005" exitCode=0 Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.016454 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"8c7bac4bbfa7a551b4bc123db2f23e406ad5c1983352def084482a277bb70005"} Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.016482 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"f9b51566c32baca16b7c982a1f5be2bc77d96745c6b89bf249154277d12b15c6"} Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.029440 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c78fc8c5-fgk9h" event={"ID":"f8201b1d-afab-4fc2-bde1-bad212359f0a","Type":"ContainerStarted","Data":"483391b47f7f7de3ff15567f9ab59e31510388bc7dc9860b5dcdb74746047b2a"} Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.030223 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.068037 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-77c78fc8c5-fgk9h" 
podStartSLOduration=2.068016278 podStartE2EDuration="2.068016278s" podCreationTimestamp="2026-02-17 13:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:29.057680221 +0000 UTC m=+1148.337066663" watchObservedRunningTime="2026-02-17 13:55:29.068016278 +0000 UTC m=+1148.347402720" Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.091598 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-98n58"] Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.104979 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-98n58"] Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.112056 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:55:29 crc kubenswrapper[4768]: W0217 13:55:29.181859 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc3b9ad_1d13_4214_9824_af7003192ace.slice/crio-9f43c8a9e94381f110295c6e677e57f773cd763ab23b211206a735b951148d44 WatchSource:0}: Error finding container 9f43c8a9e94381f110295c6e677e57f773cd763ab23b211206a735b951148d44: Status 404 returned error can't find the container with id 9f43c8a9e94381f110295c6e677e57f773cd763ab23b211206a735b951148d44 Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.221030 4768 scope.go:117] "RemoveContainer" containerID="749990bad12c40693f551fb747fcf632e5d190b264bd9ed0e9c495c87e405369" Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.274509 4768 scope.go:117] "RemoveContainer" containerID="83ffe2b5d1ed0faaa82ed446a55b456fa3a71e8473ab304c756bbf132bdab653" Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.559312 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f4a20b-94a7-4b16-bdca-a99d9440b74e" 
path="/var/lib/kubelet/pods/b4f4a20b-94a7-4b16-bdca-a99d9440b74e/volumes" Feb 17 13:55:29 crc kubenswrapper[4768]: I0217 13:55:29.560548 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff892b26-a158-4942-85e5-6a657ffe4d4d" path="/var/lib/kubelet/pods/ff892b26-a158-4942-85e5-6a657ffe4d4d/volumes" Feb 17 13:55:30 crc kubenswrapper[4768]: I0217 13:55:30.042502 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c78fc8c5-fgk9h" event={"ID":"f8201b1d-afab-4fc2-bde1-bad212359f0a","Type":"ContainerStarted","Data":"685a19d457932bd76fc336735c819e78696ea3c076d28898691deaf67d0bc32c"} Feb 17 13:55:30 crc kubenswrapper[4768]: I0217 13:55:30.045407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bc3b9ad-1d13-4214-9824-af7003192ace","Type":"ContainerStarted","Data":"9f43c8a9e94381f110295c6e677e57f773cd763ab23b211206a735b951148d44"} Feb 17 13:55:30 crc kubenswrapper[4768]: I0217 13:55:30.049269 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c85df6c44-rr84t" event={"ID":"55c82341-24e7-4524-82c7-996a851af418","Type":"ContainerStarted","Data":"94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179"} Feb 17 13:55:30 crc kubenswrapper[4768]: I0217 13:55:30.049307 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c85df6c44-rr84t" event={"ID":"55c82341-24e7-4524-82c7-996a851af418","Type":"ContainerStarted","Data":"e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a"} Feb 17 13:55:30 crc kubenswrapper[4768]: I0217 13:55:30.049840 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:30 crc kubenswrapper[4768]: I0217 13:55:30.049883 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:30 crc kubenswrapper[4768]: I0217 13:55:30.070816 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c85df6c44-rr84t" podStartSLOduration=3.070797917 podStartE2EDuration="3.070797917s" podCreationTimestamp="2026-02-17 13:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:30.06546366 +0000 UTC m=+1149.344850102" watchObservedRunningTime="2026-02-17 13:55:30.070797917 +0000 UTC m=+1149.350184359" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.081918 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bc3b9ad-1d13-4214-9824-af7003192ace","Type":"ContainerStarted","Data":"2d1c215245457bf85f991f88f31a0dca885fcb114dcec4dae932d7a001c6c78d"} Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.105302 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c8md" event={"ID":"df7e53d7-b63b-41b4-b909-c6effd0dab0c","Type":"ContainerStarted","Data":"66f32dc57d820647119bc07c2c3ffc4dae0c504a4d8c1f693646f527e404d135"} Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.126602 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.126653 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.126666 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.126677 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.187019 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-db-sync-5c8md" podStartSLOduration=4.626926587 podStartE2EDuration="46.186997798s" podCreationTimestamp="2026-02-17 13:54:45 +0000 UTC" firstStartedPulling="2026-02-17 13:54:47.665573924 +0000 UTC m=+1106.944960366" lastFinishedPulling="2026-02-17 13:55:29.225644925 +0000 UTC m=+1148.505031577" observedRunningTime="2026-02-17 13:55:31.145493468 +0000 UTC m=+1150.424879910" watchObservedRunningTime="2026-02-17 13:55:31.186997798 +0000 UTC m=+1150.466384240" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.221383 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.278461 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.682177 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f459487b8-6m6q4"] Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.689961 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.699068 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f459487b8-6m6q4"] Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.828592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-internal-tls-certs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.828668 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-public-tls-certs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.828692 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-scripts\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.828711 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75lv2\" (UniqueName: \"kubernetes.io/projected/41aee306-e130-4ed4-ba8e-381531d03dc3-kube-api-access-75lv2\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.828728 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-config-data\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.828766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-combined-ca-bundle\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.828780 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41aee306-e130-4ed4-ba8e-381531d03dc3-logs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.930453 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-internal-tls-certs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.930604 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-public-tls-certs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.930652 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-scripts\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.930692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75lv2\" (UniqueName: \"kubernetes.io/projected/41aee306-e130-4ed4-ba8e-381531d03dc3-kube-api-access-75lv2\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.930733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-config-data\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.930823 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-combined-ca-bundle\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.930857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41aee306-e130-4ed4-ba8e-381531d03dc3-logs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.931645 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41aee306-e130-4ed4-ba8e-381531d03dc3-logs\") pod \"placement-6f459487b8-6m6q4\" (UID: 
\"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.938456 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-scripts\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.938698 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-internal-tls-certs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.946769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-public-tls-certs\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.947434 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-combined-ca-bundle\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: I0217 13:55:31.947831 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41aee306-e130-4ed4-ba8e-381531d03dc3-config-data\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:31 crc kubenswrapper[4768]: 
I0217 13:55:31.951694 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75lv2\" (UniqueName: \"kubernetes.io/projected/41aee306-e130-4ed4-ba8e-381531d03dc3-kube-api-access-75lv2\") pod \"placement-6f459487b8-6m6q4\" (UID: \"41aee306-e130-4ed4-ba8e-381531d03dc3\") " pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:32 crc kubenswrapper[4768]: I0217 13:55:32.013124 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:32 crc kubenswrapper[4768]: I0217 13:55:32.127933 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bc3b9ad-1d13-4214-9824-af7003192ace","Type":"ContainerStarted","Data":"296d6aba5a4d2bd2541c1c1b3437ee8e6f346ee60ab706f0f22283ada21455de"} Feb 17 13:55:32 crc kubenswrapper[4768]: I0217 13:55:32.200040 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.20001365 podStartE2EDuration="4.20001365s" podCreationTimestamp="2026-02-17 13:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:32.173514656 +0000 UTC m=+1151.452901098" watchObservedRunningTime="2026-02-17 13:55:32.20001365 +0000 UTC m=+1151.479400102" Feb 17 13:55:32 crc kubenswrapper[4768]: I0217 13:55:32.518172 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f459487b8-6m6q4"] Feb 17 13:55:33 crc kubenswrapper[4768]: I0217 13:55:33.138171 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmq2h" event={"ID":"e23e418f-2c16-4fa8-94fb-5e575affd61b","Type":"ContainerStarted","Data":"e368e83be738ce819f9a99a41a85b5c0583f0baa2c6c1f5bd60a123d3eb716a7"} Feb 17 13:55:33 crc kubenswrapper[4768]: I0217 13:55:33.160212 4768 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-db-sync-jmq2h" podStartSLOduration=2.880621363 podStartE2EDuration="47.160190028s" podCreationTimestamp="2026-02-17 13:54:46 +0000 UTC" firstStartedPulling="2026-02-17 13:54:47.926602825 +0000 UTC m=+1107.205989267" lastFinishedPulling="2026-02-17 13:55:32.20617149 +0000 UTC m=+1151.485557932" observedRunningTime="2026-02-17 13:55:33.154597603 +0000 UTC m=+1152.433984045" watchObservedRunningTime="2026-02-17 13:55:33.160190028 +0000 UTC m=+1152.439576470" Feb 17 13:55:34 crc kubenswrapper[4768]: I0217 13:55:34.287018 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:34 crc kubenswrapper[4768]: I0217 13:55:34.287514 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:55:34 crc kubenswrapper[4768]: I0217 13:55:34.509744 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 13:55:36 crc kubenswrapper[4768]: I0217 13:55:36.024526 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 17 13:55:36 crc kubenswrapper[4768]: I0217 13:55:36.135531 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6584d79658-wtxrc" podUID="331a37d3-96b1-4065-9941-25acc64cc6c1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 17 13:55:37 crc kubenswrapper[4768]: I0217 13:55:37.172982 4768 generic.go:334] "Generic (PLEG): container finished" podID="e23e418f-2c16-4fa8-94fb-5e575affd61b" 
containerID="e368e83be738ce819f9a99a41a85b5c0583f0baa2c6c1f5bd60a123d3eb716a7" exitCode=0 Feb 17 13:55:37 crc kubenswrapper[4768]: I0217 13:55:37.173080 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmq2h" event={"ID":"e23e418f-2c16-4fa8-94fb-5e575affd61b","Type":"ContainerDied","Data":"e368e83be738ce819f9a99a41a85b5c0583f0baa2c6c1f5bd60a123d3eb716a7"} Feb 17 13:55:37 crc kubenswrapper[4768]: I0217 13:55:37.176378 4768 generic.go:334] "Generic (PLEG): container finished" podID="df7e53d7-b63b-41b4-b909-c6effd0dab0c" containerID="66f32dc57d820647119bc07c2c3ffc4dae0c504a4d8c1f693646f527e404d135" exitCode=0 Feb 17 13:55:37 crc kubenswrapper[4768]: I0217 13:55:37.176416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c8md" event={"ID":"df7e53d7-b63b-41b4-b909-c6effd0dab0c","Type":"ContainerDied","Data":"66f32dc57d820647119bc07c2c3ffc4dae0c504a4d8c1f693646f527e404d135"} Feb 17 13:55:37 crc kubenswrapper[4768]: W0217 13:55:37.269383 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41aee306_e130_4ed4_ba8e_381531d03dc3.slice/crio-33bb7420dec7f5319ddea1c24fdac9759372516b4281f5f30336cf8b1f215b7a WatchSource:0}: Error finding container 33bb7420dec7f5319ddea1c24fdac9759372516b4281f5f30336cf8b1f215b7a: Status 404 returned error can't find the container with id 33bb7420dec7f5319ddea1c24fdac9759372516b4281f5f30336cf8b1f215b7a Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.188071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f459487b8-6m6q4" event={"ID":"41aee306-e130-4ed4-ba8e-381531d03dc3","Type":"ContainerStarted","Data":"33bb7420dec7f5319ddea1c24fdac9759372516b4281f5f30336cf8b1f215b7a"} Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.419186 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 
17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.419795 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.464774 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.492651 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.585658 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:55:38 crc kubenswrapper[4768]: E0217 13:55:38.615663 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.625503 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5c8md" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.682979 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-combined-ca-bundle\") pod \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683037 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzfsj\" (UniqueName: \"kubernetes.io/projected/e23e418f-2c16-4fa8-94fb-5e575affd61b-kube-api-access-tzfsj\") pod \"e23e418f-2c16-4fa8-94fb-5e575affd61b\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683085 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df7e53d7-b63b-41b4-b909-c6effd0dab0c-etc-machine-id\") pod \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683145 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-config-data\") pod \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683238 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-db-sync-config-data\") pod \"e23e418f-2c16-4fa8-94fb-5e575affd61b\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683278 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-l2kmg\" (UniqueName: \"kubernetes.io/projected/df7e53d7-b63b-41b4-b909-c6effd0dab0c-kube-api-access-l2kmg\") pod \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683345 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-combined-ca-bundle\") pod \"e23e418f-2c16-4fa8-94fb-5e575affd61b\" (UID: \"e23e418f-2c16-4fa8-94fb-5e575affd61b\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683374 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-scripts\") pod \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.683415 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-db-sync-config-data\") pod \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\" (UID: \"df7e53d7-b63b-41b4-b909-c6effd0dab0c\") " Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.684336 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df7e53d7-b63b-41b4-b909-c6effd0dab0c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "df7e53d7-b63b-41b4-b909-c6effd0dab0c" (UID: "df7e53d7-b63b-41b4-b909-c6effd0dab0c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.690756 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df7e53d7-b63b-41b4-b909-c6effd0dab0c" (UID: "df7e53d7-b63b-41b4-b909-c6effd0dab0c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.691005 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-scripts" (OuterVolumeSpecName: "scripts") pod "df7e53d7-b63b-41b4-b909-c6effd0dab0c" (UID: "df7e53d7-b63b-41b4-b909-c6effd0dab0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.691079 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7e53d7-b63b-41b4-b909-c6effd0dab0c-kube-api-access-l2kmg" (OuterVolumeSpecName: "kube-api-access-l2kmg") pod "df7e53d7-b63b-41b4-b909-c6effd0dab0c" (UID: "df7e53d7-b63b-41b4-b909-c6effd0dab0c"). InnerVolumeSpecName "kube-api-access-l2kmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.692328 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23e418f-2c16-4fa8-94fb-5e575affd61b-kube-api-access-tzfsj" (OuterVolumeSpecName: "kube-api-access-tzfsj") pod "e23e418f-2c16-4fa8-94fb-5e575affd61b" (UID: "e23e418f-2c16-4fa8-94fb-5e575affd61b"). InnerVolumeSpecName "kube-api-access-tzfsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.706250 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e23e418f-2c16-4fa8-94fb-5e575affd61b" (UID: "e23e418f-2c16-4fa8-94fb-5e575affd61b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.714333 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df7e53d7-b63b-41b4-b909-c6effd0dab0c" (UID: "df7e53d7-b63b-41b4-b909-c6effd0dab0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.722251 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e23e418f-2c16-4fa8-94fb-5e575affd61b" (UID: "e23e418f-2c16-4fa8-94fb-5e575affd61b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.739392 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-config-data" (OuterVolumeSpecName: "config-data") pod "df7e53d7-b63b-41b4-b909-c6effd0dab0c" (UID: "df7e53d7-b63b-41b4-b909-c6effd0dab0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.784963 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.784994 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.785002 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.785010 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.785019 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzfsj\" (UniqueName: \"kubernetes.io/projected/e23e418f-2c16-4fa8-94fb-5e575affd61b-kube-api-access-tzfsj\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.785030 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df7e53d7-b63b-41b4-b909-c6effd0dab0c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.785037 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7e53d7-b63b-41b4-b909-c6effd0dab0c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.785045 
4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e23e418f-2c16-4fa8-94fb-5e575affd61b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:38 crc kubenswrapper[4768]: I0217 13:55:38.785059 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kmg\" (UniqueName: \"kubernetes.io/projected/df7e53d7-b63b-41b4-b909-c6effd0dab0c-kube-api-access-l2kmg\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.213356 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5c8md" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.213474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c8md" event={"ID":"df7e53d7-b63b-41b4-b909-c6effd0dab0c","Type":"ContainerDied","Data":"4769e51a18129497bb9e0b8bf6197904947b6c69bee9e91c741476f9a28892c5"} Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.213528 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4769e51a18129497bb9e0b8bf6197904947b6c69bee9e91c741476f9a28892c5" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.221239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f459487b8-6m6q4" event={"ID":"41aee306-e130-4ed4-ba8e-381531d03dc3","Type":"ContainerStarted","Data":"4852b0f520e662f3f4e4e614d6973d3e80d70665f97e9cce13dd3a157843b8bf"} Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.221320 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f459487b8-6m6q4" event={"ID":"41aee306-e130-4ed4-ba8e-381531d03dc3","Type":"ContainerStarted","Data":"6f947df71acd986e6bc86b6940c2485267309942607987801fce756eaf63feac"} Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.222916 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f459487b8-6m6q4" 
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.222995 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.239379 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerStarted","Data":"a801ca142c3d415f3635d2c9213aadfe9f2b4c13a454d7ca716b3a097448339f"} Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.239613 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="ceilometer-notification-agent" containerID="cri-o://1076244baedb5276f95f29bd05ad24800133851d3eaaca34c9a31c87bf95c679" gracePeriod=30 Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.239728 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.239789 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="proxy-httpd" containerID="cri-o://a801ca142c3d415f3635d2c9213aadfe9f2b4c13a454d7ca716b3a097448339f" gracePeriod=30 Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.239857 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="sg-core" containerID="cri-o://ccd2128ae128dcd19682ea32114c0e6bfb94fa980bae79579791788a93fd9111" gracePeriod=30 Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.257745 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jmq2h" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.258390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jmq2h" event={"ID":"e23e418f-2c16-4fa8-94fb-5e575affd61b","Type":"ContainerDied","Data":"9e9636fc8b31b07ad1150758abaf2fe9fbb51a72dd958c4d5c56c48933b24f68"} Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.258447 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e9636fc8b31b07ad1150758abaf2fe9fbb51a72dd958c4d5c56c48933b24f68" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.258903 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.261651 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.275161 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f459487b8-6m6q4" podStartSLOduration=8.275132641 podStartE2EDuration="8.275132641s" podCreationTimestamp="2026-02-17 13:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:39.259494909 +0000 UTC m=+1158.538881371" watchObservedRunningTime="2026-02-17 13:55:39.275132641 +0000 UTC m=+1158.554519093" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.546618 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68cd444875-wgnnm"] Feb 17 13:55:39 crc kubenswrapper[4768]: E0217 13:55:39.546960 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23e418f-2c16-4fa8-94fb-5e575affd61b" containerName="barbican-db-sync" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.546975 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e23e418f-2c16-4fa8-94fb-5e575affd61b" containerName="barbican-db-sync" Feb 17 13:55:39 crc kubenswrapper[4768]: E0217 13:55:39.547023 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7e53d7-b63b-41b4-b909-c6effd0dab0c" containerName="cinder-db-sync" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.547032 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7e53d7-b63b-41b4-b909-c6effd0dab0c" containerName="cinder-db-sync" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.547289 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23e418f-2c16-4fa8-94fb-5e575affd61b" containerName="barbican-db-sync" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.547329 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7e53d7-b63b-41b4-b909-c6effd0dab0c" containerName="cinder-db-sync" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.561870 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68cd444875-wgnnm" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.568950 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9nbg2" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.569728 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.569861 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-f5cff5694-mvlv5"] Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.571426 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.581207 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.593480 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.597966 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-logs\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-combined-ca-bundle\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-config-data-custom\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598081 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-config-data\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598143 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrfh\" (UniqueName: \"kubernetes.io/projected/486b688d-e9dd-4c6b-ae8d-c2e536172e53-kube-api-access-mzrfh\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598163 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-combined-ca-bundle\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598261 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8z9c\" (UniqueName: \"kubernetes.io/projected/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-kube-api-access-g8z9c\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598289 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-config-data-custom\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm" Feb 17 13:55:39 crc kubenswrapper[4768]: 
I0217 13:55:39.598317 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486b688d-e9dd-4c6b-ae8d-c2e536172e53-logs\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.598336 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-config-data\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.602798 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jbx4n"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.604689 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.628656 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.636452 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.639849 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.640013 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rp4mv"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.641596 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.641740 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.647210 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68cd444875-wgnnm"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.679519 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f5cff5694-mvlv5"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.710443 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.711134 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8z9c\" (UniqueName: \"kubernetes.io/projected/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-kube-api-access-g8z9c\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.711672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-config-data-custom\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.712786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486b688d-e9dd-4c6b-ae8d-c2e536172e53-logs\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.712856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-config-data\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.712934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-logs\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.712953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-combined-ca-bundle\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.713047 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-config-data-custom\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.713143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-config-data\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.713243 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrfh\" (UniqueName: \"kubernetes.io/projected/486b688d-e9dd-4c6b-ae8d-c2e536172e53-kube-api-access-mzrfh\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.713278 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-combined-ca-bundle\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.716324 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-logs\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.716669 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486b688d-e9dd-4c6b-ae8d-c2e536172e53-logs\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.729166 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-combined-ca-bundle\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.733828 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-config-data-custom\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.740780 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jbx4n"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.751527 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-config-data\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.751695 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-combined-ca-bundle\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.752608 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486b688d-e9dd-4c6b-ae8d-c2e536172e53-config-data\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.758644 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8z9c\" (UniqueName: \"kubernetes.io/projected/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-kube-api-access-g8z9c\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.762673 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7-config-data-custom\") pod \"barbican-keystone-listener-f5cff5694-mvlv5\" (UID: \"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7\") " pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.763126 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrfh\" (UniqueName: \"kubernetes.io/projected/486b688d-e9dd-4c6b-ae8d-c2e536172e53-kube-api-access-mzrfh\") pod \"barbican-worker-68cd444875-wgnnm\" (UID: \"486b688d-e9dd-4c6b-ae8d-c2e536172e53\") " pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816682 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-svc\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816736 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816767 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-config\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816838 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnm28\" (UniqueName: \"kubernetes.io/projected/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-kube-api-access-fnm28\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816895 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.816986 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7854\" (UniqueName: \"kubernetes.io/projected/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-kube-api-access-r7854\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.817020 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.817039 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.862142 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jbx4n"]
Feb 17 13:55:39 crc kubenswrapper[4768]: E0217 13:55:39.878462 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-fnm28 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-b895b5785-jbx4n" podUID="95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.908326 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68cd444875-wgnnm"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.909874 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kw2l4"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.911634 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919208 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919353 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7854\" (UniqueName: \"kubernetes.io/projected/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-kube-api-access-r7854\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919408 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919430 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-svc\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919507 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919597 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-config\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.919615 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnm28\" (UniqueName: \"kubernetes.io/projected/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-kube-api-access-fnm28\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.920278 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.920863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.921244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-svc\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.921486 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.922249 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.925886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.926196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-config\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.926988 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c9fbc7fd6-zqhzg"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.928093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-scripts\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.931680 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.934215 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.935368 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.942345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.943021 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.957158 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7854\" (UniqueName: \"kubernetes.io/projected/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-kube-api-access-r7854\") pod \"cinder-scheduler-0\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.967629 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c9fbc7fd6-zqhzg"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.974990 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.977414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnm28\" (UniqueName: \"kubernetes.io/projected/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-kube-api-access-fnm28\") pod \"dnsmasq-dns-b895b5785-jbx4n\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " pod="openstack/dnsmasq-dns-b895b5785-jbx4n"
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.977469 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kw2l4"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.994825 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 17 13:55:39 crc kubenswrapper[4768]: I0217 13:55:39.996691 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.001301 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.003416 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.022826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.022890 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.022922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.022977 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zz2\" (UniqueName: \"kubernetes.io/projected/e9696419-5c03-4d1d-bd0c-7bf7becd6239-kube-api-access-s5zz2\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.023314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.023360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-config\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125312 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125380 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125412 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5zz2\" (UniqueName: \"kubernetes.io/projected/e9696419-5c03-4d1d-bd0c-7bf7becd6239-kube-api-access-s5zz2\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125441 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49g8l\" (UniqueName: \"kubernetes.io/projected/265e5b14-8f7f-49fb-9984-421898d607b4-kube-api-access-49g8l\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125459 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125475 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-scripts\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125493 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265e5b14-8f7f-49fb-9984-421898d607b4-logs\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125538 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125557 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data-custom\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125575 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/265e5b14-8f7f-49fb-9984-421898d607b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125615 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-config\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125635 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125659 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rhb\" (UniqueName: \"kubernetes.io/projected/e2a044ad-f31e-4c3b-9659-91650838f9da-kube-api-access-w9rhb\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125677 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125701 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a044ad-f31e-4c3b-9659-91650838f9da-logs\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.125733 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-combined-ca-bundle\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.126564 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.127039 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.127546 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.128736 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.129170 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-config\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.146638 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5zz2\" (UniqueName: \"kubernetes.io/projected/e9696419-5c03-4d1d-bd0c-7bf7becd6239-kube-api-access-s5zz2\") pod \"dnsmasq-dns-5c9776ccc5-kw2l4\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228237 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49g8l\" (UniqueName: \"kubernetes.io/projected/265e5b14-8f7f-49fb-9984-421898d607b4-kube-api-access-49g8l\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228569 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0"
Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-scripts\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265e5b14-8f7f-49fb-9984-421898d607b4-logs\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228716 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data-custom\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228740 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/265e5b14-8f7f-49fb-9984-421898d607b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228801 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc 
kubenswrapper[4768]: I0217 13:55:40.228836 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rhb\" (UniqueName: \"kubernetes.io/projected/e2a044ad-f31e-4c3b-9659-91650838f9da-kube-api-access-w9rhb\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228926 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a044ad-f31e-4c3b-9659-91650838f9da-logs\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.228971 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-combined-ca-bundle\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.230749 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a044ad-f31e-4c3b-9659-91650838f9da-logs\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.231088 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/265e5b14-8f7f-49fb-9984-421898d607b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.235324 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.235338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265e5b14-8f7f-49fb-9984-421898d607b4-logs\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.235831 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-combined-ca-bundle\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.237524 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-scripts\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.237969 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " 
pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.240382 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.255269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.256038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data-custom\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.263139 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49g8l\" (UniqueName: \"kubernetes.io/projected/265e5b14-8f7f-49fb-9984-421898d607b4-kube-api-access-49g8l\") pod \"cinder-api-0\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.263540 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.278681 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rhb\" (UniqueName: \"kubernetes.io/projected/e2a044ad-f31e-4c3b-9659-91650838f9da-kube-api-access-w9rhb\") pod \"barbican-api-6c9fbc7fd6-zqhzg\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.314345 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerID="a801ca142c3d415f3635d2c9213aadfe9f2b4c13a454d7ca716b3a097448339f" exitCode=0 Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.314386 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerID="ccd2128ae128dcd19682ea32114c0e6bfb94fa980bae79579791788a93fd9111" exitCode=2 Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.318479 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jbx4n" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.318652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerDied","Data":"a801ca142c3d415f3635d2c9213aadfe9f2b4c13a454d7ca716b3a097448339f"} Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.318695 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerDied","Data":"ccd2128ae128dcd19682ea32114c0e6bfb94fa980bae79579791788a93fd9111"} Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.320886 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.374318 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.534168 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jbx4n" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.609950 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68cd444875-wgnnm"] Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.660729 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-config\") pod \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.660777 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-swift-storage-0\") pod \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.660804 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-sb\") pod \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.660833 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnm28\" (UniqueName: \"kubernetes.io/projected/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-kube-api-access-fnm28\") pod \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\" (UID: 
\"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.660924 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-nb\") pod \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.660992 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-svc\") pod \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\" (UID: \"95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee\") " Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.661862 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee" (UID: "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.661930 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee" (UID: "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.662575 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-config" (OuterVolumeSpecName: "config") pod "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee" (UID: "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.662823 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee" (UID: "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.663004 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee" (UID: "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.674687 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-kube-api-access-fnm28" (OuterVolumeSpecName: "kube-api-access-fnm28") pod "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee" (UID: "95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee"). InnerVolumeSpecName "kube-api-access-fnm28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.763426 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.763887 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.763923 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.763935 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.763944 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnm28\" (UniqueName: \"kubernetes.io/projected/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-kube-api-access-fnm28\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.763953 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.767333 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f5cff5694-mvlv5"] Feb 17 13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.931895 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 
13:55:40 crc kubenswrapper[4768]: I0217 13:55:40.938249 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kw2l4"] Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.027096 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c9fbc7fd6-zqhzg"] Feb 17 13:55:41 crc kubenswrapper[4768]: W0217 13:55:41.036711 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod265e5b14_8f7f_49fb_9984_421898d607b4.slice/crio-d6f2cc30a1eb0ba75a836ccdfd357538430f658a70ca9f5826fb3ac514241316 WatchSource:0}: Error finding container d6f2cc30a1eb0ba75a836ccdfd357538430f658a70ca9f5826fb3ac514241316: Status 404 returned error can't find the container with id d6f2cc30a1eb0ba75a836ccdfd357538430f658a70ca9f5826fb3ac514241316 Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.037633 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.339719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"265e5b14-8f7f-49fb-9984-421898d607b4","Type":"ContainerStarted","Data":"d6f2cc30a1eb0ba75a836ccdfd357538430f658a70ca9f5826fb3ac514241316"} Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.342028 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785","Type":"ContainerStarted","Data":"d21534418276c9f8a7adcafdf40e136748eabc8b4ae771108b1f6db433f1d336"} Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.343147 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" event={"ID":"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7","Type":"ContainerStarted","Data":"764876a3c9fd1c0b000ca5d9effea8a20d33fcfdfb2dc4dcce9028a26bc2a0e0"} Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 
13:55:41.344170 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" event={"ID":"e9696419-5c03-4d1d-bd0c-7bf7becd6239","Type":"ContainerStarted","Data":"b1a9808b9043a7cb44999e6eda0c42fa77e5a37a0b2810ca075660a49c6c30ba"} Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.349411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68cd444875-wgnnm" event={"ID":"486b688d-e9dd-4c6b-ae8d-c2e536172e53","Type":"ContainerStarted","Data":"e4651fcfdbc33c16bf65078fa039dcd65c06b4deda94a9237ba9ca3e8e3eb5a1"} Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.351711 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" event={"ID":"e2a044ad-f31e-4c3b-9659-91650838f9da","Type":"ContainerStarted","Data":"ed30c74ad3e19f911c9a27c219eff65767a6992c8f2c18d92b5ea9e88e5b4f43"} Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.351757 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-jbx4n" Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.439346 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jbx4n"] Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.462052 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-jbx4n"] Feb 17 13:55:41 crc kubenswrapper[4768]: I0217 13:55:41.563788 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee" path="/var/lib/kubelet/pods/95cbd6d2-1337-43cd-8ee9-2b3bc5ce70ee/volumes" Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.009925 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.234537 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.234862 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.307576 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.484345 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"265e5b14-8f7f-49fb-9984-421898d607b4","Type":"ContainerStarted","Data":"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10"} Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.497749 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerID="b8127ed30e1421cfce5fe50c51f7a182f40811a6bfb6cc3814a69f88ed5b8d2f" exitCode=0 Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.498089 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" event={"ID":"e9696419-5c03-4d1d-bd0c-7bf7becd6239","Type":"ContainerDied","Data":"b8127ed30e1421cfce5fe50c51f7a182f40811a6bfb6cc3814a69f88ed5b8d2f"} Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.512545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" event={"ID":"e2a044ad-f31e-4c3b-9659-91650838f9da","Type":"ContainerStarted","Data":"e07f63dbdb33f6fa1ef3c171110eaecf383514533db5e7ad803e547be67ad11c"} Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.512608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" event={"ID":"e2a044ad-f31e-4c3b-9659-91650838f9da","Type":"ContainerStarted","Data":"43e7ea3b3299e81c360a334cd1a4c79c1dde1801b98aa5377fc80a9a292e924e"} Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.512982 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.513028 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:55:42 crc kubenswrapper[4768]: I0217 13:55:42.617709 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" podStartSLOduration=3.617685955 podStartE2EDuration="3.617685955s" podCreationTimestamp="2026-02-17 13:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:42.547386637 +0000 UTC m=+1161.826773079" watchObservedRunningTime="2026-02-17 13:55:42.617685955 +0000 UTC m=+1161.897072397" Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.540003 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api-log" 
containerID="cri-o://321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10" gracePeriod=30 Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.540084 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api" containerID="cri-o://40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133" gracePeriod=30 Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.565884 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.565924 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"265e5b14-8f7f-49fb-9984-421898d607b4","Type":"ContainerStarted","Data":"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133"} Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.566913 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.56689757 podStartE2EDuration="4.56689757s" podCreationTimestamp="2026-02-17 13:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:43.565906892 +0000 UTC m=+1162.845293334" watchObservedRunningTime="2026-02-17 13:55:43.56689757 +0000 UTC m=+1162.846284012" Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.569216 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785","Type":"ContainerStarted","Data":"a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024"} Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.587468 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c0e296a-80f3-4efe-bb28-17fdfd153397" 
containerID="1076244baedb5276f95f29bd05ad24800133851d3eaaca34c9a31c87bf95c679" exitCode=0 Feb 17 13:55:43 crc kubenswrapper[4768]: I0217 13:55:43.587542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerDied","Data":"1076244baedb5276f95f29bd05ad24800133851d3eaaca34c9a31c87bf95c679"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.051829 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.065403 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-run-httpd\") pod \"1c0e296a-80f3-4efe-bb28-17fdfd153397\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.065475 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-combined-ca-bundle\") pod \"1c0e296a-80f3-4efe-bb28-17fdfd153397\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.065521 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-log-httpd\") pod \"1c0e296a-80f3-4efe-bb28-17fdfd153397\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.065593 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-scripts\") pod \"1c0e296a-80f3-4efe-bb28-17fdfd153397\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 
13:55:44.065630 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-config-data\") pod \"1c0e296a-80f3-4efe-bb28-17fdfd153397\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.065671 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqxc7\" (UniqueName: \"kubernetes.io/projected/1c0e296a-80f3-4efe-bb28-17fdfd153397-kube-api-access-jqxc7\") pod \"1c0e296a-80f3-4efe-bb28-17fdfd153397\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.065707 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-sg-core-conf-yaml\") pod \"1c0e296a-80f3-4efe-bb28-17fdfd153397\" (UID: \"1c0e296a-80f3-4efe-bb28-17fdfd153397\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.066822 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1c0e296a-80f3-4efe-bb28-17fdfd153397" (UID: "1c0e296a-80f3-4efe-bb28-17fdfd153397"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.067071 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1c0e296a-80f3-4efe-bb28-17fdfd153397" (UID: "1c0e296a-80f3-4efe-bb28-17fdfd153397"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.103617 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-scripts" (OuterVolumeSpecName: "scripts") pod "1c0e296a-80f3-4efe-bb28-17fdfd153397" (UID: "1c0e296a-80f3-4efe-bb28-17fdfd153397"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.116583 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0e296a-80f3-4efe-bb28-17fdfd153397-kube-api-access-jqxc7" (OuterVolumeSpecName: "kube-api-access-jqxc7") pod "1c0e296a-80f3-4efe-bb28-17fdfd153397" (UID: "1c0e296a-80f3-4efe-bb28-17fdfd153397"). InnerVolumeSpecName "kube-api-access-jqxc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.167940 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.167978 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.167990 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqxc7\" (UniqueName: \"kubernetes.io/projected/1c0e296a-80f3-4efe-bb28-17fdfd153397-kube-api-access-jqxc7\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.168002 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c0e296a-80f3-4efe-bb28-17fdfd153397-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 
crc kubenswrapper[4768]: I0217 13:55:44.194012 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1c0e296a-80f3-4efe-bb28-17fdfd153397" (UID: "1c0e296a-80f3-4efe-bb28-17fdfd153397"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.252716 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c0e296a-80f3-4efe-bb28-17fdfd153397" (UID: "1c0e296a-80f3-4efe-bb28-17fdfd153397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.269270 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.269311 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.269513 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.301140 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-config-data" (OuterVolumeSpecName: "config-data") pod "1c0e296a-80f3-4efe-bb28-17fdfd153397" (UID: "1c0e296a-80f3-4efe-bb28-17fdfd153397"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.370745 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0e296a-80f3-4efe-bb28-17fdfd153397-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471537 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-scripts\") pod \"265e5b14-8f7f-49fb-9984-421898d607b4\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471597 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data-custom\") pod \"265e5b14-8f7f-49fb-9984-421898d607b4\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471684 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/265e5b14-8f7f-49fb-9984-421898d607b4-etc-machine-id\") pod \"265e5b14-8f7f-49fb-9984-421898d607b4\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471756 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-combined-ca-bundle\") pod \"265e5b14-8f7f-49fb-9984-421898d607b4\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data\") pod 
\"265e5b14-8f7f-49fb-9984-421898d607b4\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471839 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265e5b14-8f7f-49fb-9984-421898d607b4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "265e5b14-8f7f-49fb-9984-421898d607b4" (UID: "265e5b14-8f7f-49fb-9984-421898d607b4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471884 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49g8l\" (UniqueName: \"kubernetes.io/projected/265e5b14-8f7f-49fb-9984-421898d607b4-kube-api-access-49g8l\") pod \"265e5b14-8f7f-49fb-9984-421898d607b4\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.471958 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265e5b14-8f7f-49fb-9984-421898d607b4-logs\") pod \"265e5b14-8f7f-49fb-9984-421898d607b4\" (UID: \"265e5b14-8f7f-49fb-9984-421898d607b4\") " Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.472289 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/265e5b14-8f7f-49fb-9984-421898d607b4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.472557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265e5b14-8f7f-49fb-9984-421898d607b4-logs" (OuterVolumeSpecName: "logs") pod "265e5b14-8f7f-49fb-9984-421898d607b4" (UID: "265e5b14-8f7f-49fb-9984-421898d607b4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.481796 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265e5b14-8f7f-49fb-9984-421898d607b4-kube-api-access-49g8l" (OuterVolumeSpecName: "kube-api-access-49g8l") pod "265e5b14-8f7f-49fb-9984-421898d607b4" (UID: "265e5b14-8f7f-49fb-9984-421898d607b4"). InnerVolumeSpecName "kube-api-access-49g8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.482322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "265e5b14-8f7f-49fb-9984-421898d607b4" (UID: "265e5b14-8f7f-49fb-9984-421898d607b4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.483557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-scripts" (OuterVolumeSpecName: "scripts") pod "265e5b14-8f7f-49fb-9984-421898d607b4" (UID: "265e5b14-8f7f-49fb-9984-421898d607b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.510040 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "265e5b14-8f7f-49fb-9984-421898d607b4" (UID: "265e5b14-8f7f-49fb-9984-421898d607b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.556056 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data" (OuterVolumeSpecName: "config-data") pod "265e5b14-8f7f-49fb-9984-421898d607b4" (UID: "265e5b14-8f7f-49fb-9984-421898d607b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.575432 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49g8l\" (UniqueName: \"kubernetes.io/projected/265e5b14-8f7f-49fb-9984-421898d607b4-kube-api-access-49g8l\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.575462 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265e5b14-8f7f-49fb-9984-421898d607b4-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.575471 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.575479 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.575488 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.575495 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/265e5b14-8f7f-49fb-9984-421898d607b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.600492 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.600505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c0e296a-80f3-4efe-bb28-17fdfd153397","Type":"ContainerDied","Data":"02908c2a342a5a281b0209631f62c4cf5bc9e21166549d9dff4194a298c6a659"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.600559 4768 scope.go:117] "RemoveContainer" containerID="a801ca142c3d415f3635d2c9213aadfe9f2b4c13a454d7ca716b3a097448339f" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.604197 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68cd444875-wgnnm" event={"ID":"486b688d-e9dd-4c6b-ae8d-c2e536172e53","Type":"ContainerStarted","Data":"14e81ead56da854dc006dc1271bf9ada44aa203cc58eecaecd5e6929124df45f"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.615824 4768 generic.go:334] "Generic (PLEG): container finished" podID="265e5b14-8f7f-49fb-9984-421898d607b4" containerID="40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133" exitCode=0 Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.615863 4768 generic.go:334] "Generic (PLEG): container finished" podID="265e5b14-8f7f-49fb-9984-421898d607b4" containerID="321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10" exitCode=143 Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.615918 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"265e5b14-8f7f-49fb-9984-421898d607b4","Type":"ContainerDied","Data":"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.615949 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-api-0" event={"ID":"265e5b14-8f7f-49fb-9984-421898d607b4","Type":"ContainerDied","Data":"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.615964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"265e5b14-8f7f-49fb-9984-421898d607b4","Type":"ContainerDied","Data":"d6f2cc30a1eb0ba75a836ccdfd357538430f658a70ca9f5826fb3ac514241316"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.616022 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.621089 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" event={"ID":"e9696419-5c03-4d1d-bd0c-7bf7becd6239","Type":"ContainerStarted","Data":"1516be8385d9e8af36ef4572a00ff3fc2ea3aff8702de8ab771ba1170ab97327"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.621237 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.628229 4768 scope.go:117] "RemoveContainer" containerID="ccd2128ae128dcd19682ea32114c0e6bfb94fa980bae79579791788a93fd9111" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.643517 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" event={"ID":"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7","Type":"ContainerStarted","Data":"4d495ec501b4e932d57289870f9deee365fcba8b5a0b5dc2447aaa1742a2a4da"} Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.694788 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.702173 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 
13:55:44.723618 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: E0217 13:55:44.724030 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="ceilometer-notification-agent" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724056 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="ceilometer-notification-agent" Feb 17 13:55:44 crc kubenswrapper[4768]: E0217 13:55:44.724077 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724085 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api" Feb 17 13:55:44 crc kubenswrapper[4768]: E0217 13:55:44.724097 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="sg-core" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724124 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="sg-core" Feb 17 13:55:44 crc kubenswrapper[4768]: E0217 13:55:44.724162 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api-log" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724171 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api-log" Feb 17 13:55:44 crc kubenswrapper[4768]: E0217 13:55:44.724188 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="proxy-httpd" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724196 4768 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="proxy-httpd" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724422 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="sg-core" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724451 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="proxy-httpd" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724470 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" containerName="ceilometer-notification-agent" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724481 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.724494 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" containerName="cinder-api-log" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.726418 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.732460 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.732700 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.738708 4768 scope.go:117] "RemoveContainer" containerID="1076244baedb5276f95f29bd05ad24800133851d3eaaca34c9a31c87bf95c679" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.740144 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.746256 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" podStartSLOduration=5.746241739 podStartE2EDuration="5.746241739s" podCreationTimestamp="2026-02-17 13:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:44.711378314 +0000 UTC m=+1163.990764756" watchObservedRunningTime="2026-02-17 13:55:44.746241739 +0000 UTC m=+1164.025628181" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.762489 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.776218 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.785999 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.787316 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.791624 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.791818 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.791931 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.807041 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.814655 4768 scope.go:117] "RemoveContainer" containerID="40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.881971 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-run-httpd\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.882063 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-config-data\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.882149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" 
Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.882183 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-scripts\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.882219 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vktv\" (UniqueName: \"kubernetes.io/projected/8dde7e08-dc91-4904-9e22-5e77b459a138-kube-api-access-6vktv\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.882285 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-log-httpd\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.882388 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.897344 4768 scope.go:117] "RemoveContainer" containerID="321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.913505 4768 scope.go:117] "RemoveContainer" containerID="40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133" Feb 17 13:55:44 crc kubenswrapper[4768]: E0217 13:55:44.913959 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133\": container with ID starting with 40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133 not found: ID does not exist" containerID="40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.914008 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133"} err="failed to get container status \"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133\": rpc error: code = NotFound desc = could not find container \"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133\": container with ID starting with 40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133 not found: ID does not exist" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.914034 4768 scope.go:117] "RemoveContainer" containerID="321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10" Feb 17 13:55:44 crc kubenswrapper[4768]: E0217 13:55:44.914364 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10\": container with ID starting with 321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10 not found: ID does not exist" containerID="321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.914403 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10"} err="failed to get container status \"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10\": rpc error: code = NotFound desc = could not find container 
\"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10\": container with ID starting with 321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10 not found: ID does not exist" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.914444 4768 scope.go:117] "RemoveContainer" containerID="40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.914864 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133"} err="failed to get container status \"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133\": rpc error: code = NotFound desc = could not find container \"40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133\": container with ID starting with 40e9f091c855583dc044a018e43592010cc3fbd569a20b83b2be035fc21f8133 not found: ID does not exist" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.914888 4768 scope.go:117] "RemoveContainer" containerID="321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.915117 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10"} err="failed to get container status \"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10\": rpc error: code = NotFound desc = could not find container \"321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10\": container with ID starting with 321a36a8d6b337d3f0d53a1cf72cb83e9f05dda0a2c5eca5783e23af7c576b10 not found: ID does not exist" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.983975 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-log-httpd\") pod 
\"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984037 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984115 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-public-tls-certs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984145 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984163 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-config-data-custom\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984183 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-etc-machine-id\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc 
kubenswrapper[4768]: I0217 13:55:44.984201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-run-httpd\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984216 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-scripts\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-config-data\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984253 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-logs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984278 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984295 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-config-data\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984341 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-scripts\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984367 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppq57\" (UniqueName: \"kubernetes.io/projected/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-kube-api-access-ppq57\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vktv\" (UniqueName: \"kubernetes.io/projected/8dde7e08-dc91-4904-9e22-5e77b459a138-kube-api-access-6vktv\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.984522 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-log-httpd\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 
17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.985161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-run-httpd\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.991026 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-scripts\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.991336 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:44 crc kubenswrapper[4768]: I0217 13:55:44.994902 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.001598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vktv\" (UniqueName: \"kubernetes.io/projected/8dde7e08-dc91-4904-9e22-5e77b459a138-kube-api-access-6vktv\") pod \"ceilometer-0\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.003295 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-config-data\") pod \"ceilometer-0\" 
(UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " pod="openstack/ceilometer-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.063998 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086050 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-public-tls-certs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086122 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-config-data-custom\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086141 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-etc-machine-id\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086163 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-scripts\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086187 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-logs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " 
pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086222 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-config-data\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086253 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086295 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppq57\" (UniqueName: \"kubernetes.io/projected/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-kube-api-access-ppq57\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086343 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.086859 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-etc-machine-id\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.087525 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-logs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.095021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.095111 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-config-data-custom\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.095422 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-config-data\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.099398 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-public-tls-certs\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.103162 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-scripts\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.117833 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.119461 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppq57\" (UniqueName: \"kubernetes.io/projected/797f85b4-f933-4b20-b7a5-e2f3b17a5b56-kube-api-access-ppq57\") pod \"cinder-api-0\" (UID: \"797f85b4-f933-4b20-b7a5-e2f3b17a5b56\") " pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.193264 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.544043 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0e296a-80f3-4efe-bb28-17fdfd153397" path="/var/lib/kubelet/pods/1c0e296a-80f3-4efe-bb28-17fdfd153397/volumes" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.545113 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265e5b14-8f7f-49fb-9984-421898d607b4" path="/var/lib/kubelet/pods/265e5b14-8f7f-49fb-9984-421898d607b4/volumes" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.560696 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:55:45 crc kubenswrapper[4768]: W0217 13:55:45.565211 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dde7e08_dc91_4904_9e22_5e77b459a138.slice/crio-284a74bcc4e1b1e803bb852ffe4ad241cd0b5b84c3b72e0cd2a5058be6163ce3 WatchSource:0}: Error finding container 284a74bcc4e1b1e803bb852ffe4ad241cd0b5b84c3b72e0cd2a5058be6163ce3: Status 404 returned error can't find the container with id 284a74bcc4e1b1e803bb852ffe4ad241cd0b5b84c3b72e0cd2a5058be6163ce3 Feb 17 13:55:45 crc 
kubenswrapper[4768]: I0217 13:55:45.655201 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" event={"ID":"0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7","Type":"ContainerStarted","Data":"22d9d178930fc4f5c3fe0e83ff9ce8c073660dbf88d710db9bece0d4d6786bfc"} Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.660530 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68cd444875-wgnnm" event={"ID":"486b688d-e9dd-4c6b-ae8d-c2e536172e53","Type":"ContainerStarted","Data":"f0a51f276e2c1095e6a88656ec0004b8ea196d7bab7a5bce7baf47d2f6666707"} Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.664920 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerStarted","Data":"284a74bcc4e1b1e803bb852ffe4ad241cd0b5b84c3b72e0cd2a5058be6163ce3"} Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.668543 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785","Type":"ContainerStarted","Data":"971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899"} Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.672214 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.684386 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-f5cff5694-mvlv5" podStartSLOduration=3.502501094 podStartE2EDuration="6.684365167s" podCreationTimestamp="2026-02-17 13:55:39 +0000 UTC" firstStartedPulling="2026-02-17 13:55:40.768264193 +0000 UTC m=+1160.047650635" lastFinishedPulling="2026-02-17 13:55:43.950128266 +0000 UTC m=+1163.229514708" observedRunningTime="2026-02-17 13:55:45.677787415 +0000 UTC m=+1164.957173877" watchObservedRunningTime="2026-02-17 13:55:45.684365167 +0000 
UTC m=+1164.963751609" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.716458 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.787809101 podStartE2EDuration="6.716438265s" podCreationTimestamp="2026-02-17 13:55:39 +0000 UTC" firstStartedPulling="2026-02-17 13:55:40.949966907 +0000 UTC m=+1160.229353349" lastFinishedPulling="2026-02-17 13:55:41.878596071 +0000 UTC m=+1161.157982513" observedRunningTime="2026-02-17 13:55:45.705438711 +0000 UTC m=+1164.984825173" watchObservedRunningTime="2026-02-17 13:55:45.716438265 +0000 UTC m=+1164.995824717" Feb 17 13:55:45 crc kubenswrapper[4768]: I0217 13:55:45.736781 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68cd444875-wgnnm" podStartSLOduration=3.429600415 podStartE2EDuration="6.736709797s" podCreationTimestamp="2026-02-17 13:55:39 +0000 UTC" firstStartedPulling="2026-02-17 13:55:40.641620535 +0000 UTC m=+1159.921006977" lastFinishedPulling="2026-02-17 13:55:43.948729917 +0000 UTC m=+1163.228116359" observedRunningTime="2026-02-17 13:55:45.723275825 +0000 UTC m=+1165.002662277" watchObservedRunningTime="2026-02-17 13:55:45.736709797 +0000 UTC m=+1165.016096249" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.257899 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f5954c4f6-p5w62"] Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.262211 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.266933 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.269325 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.302090 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f5954c4f6-p5w62"] Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.418839 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-internal-tls-certs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.418885 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-config-data-custom\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.418921 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-logs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.418951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-public-tls-certs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.418988 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8w6\" (UniqueName: \"kubernetes.io/projected/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-kube-api-access-9r8w6\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.419011 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-combined-ca-bundle\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.419062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-config-data\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520131 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-config-data\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520206 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-internal-tls-certs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520228 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-config-data-custom\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520258 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-logs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520290 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-public-tls-certs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520330 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r8w6\" (UniqueName: \"kubernetes.io/projected/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-kube-api-access-9r8w6\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520355 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-combined-ca-bundle\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.520975 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-logs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.529593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-internal-tls-certs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.530015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-combined-ca-bundle\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.530124 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-config-data\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.533002 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-config-data-custom\") pod 
\"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.537163 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-public-tls-certs\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.541921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r8w6\" (UniqueName: \"kubernetes.io/projected/3d5e5fc2-44f3-45d7-848c-ed40f1ea1401-kube-api-access-9r8w6\") pod \"barbican-api-5f5954c4f6-p5w62\" (UID: \"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401\") " pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.612370 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.731544 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"797f85b4-f933-4b20-b7a5-e2f3b17a5b56","Type":"ContainerStarted","Data":"90cfd8b96909483e870d99f33bf173362f02e520fcb03855aa60145d6a55a0a6"} Feb 17 13:55:46 crc kubenswrapper[4768]: I0217 13:55:46.967440 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f5954c4f6-p5w62"] Feb 17 13:55:46 crc kubenswrapper[4768]: W0217 13:55:46.975362 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d5e5fc2_44f3_45d7_848c_ed40f1ea1401.slice/crio-ecbb0f467c511a3a31ad439bd31792308ab0dba2541cac3b29dd705e41eabad7 WatchSource:0}: Error finding container ecbb0f467c511a3a31ad439bd31792308ab0dba2541cac3b29dd705e41eabad7: Status 404 returned error can't find the container with id ecbb0f467c511a3a31ad439bd31792308ab0dba2541cac3b29dd705e41eabad7 Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.590610 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.741841 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"797f85b4-f933-4b20-b7a5-e2f3b17a5b56","Type":"ContainerStarted","Data":"e3c0dc4043e9b04aacad59b6de8322a9ebfd7658847ee4cbd09e9dee1729edb1"} Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.741906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"797f85b4-f933-4b20-b7a5-e2f3b17a5b56","Type":"ContainerStarted","Data":"902d2aae7715f8f402193d121c4dd34a3cdd7e9be833a44682ea03978e1c85e8"} Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.751883 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cinder-api-0" Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.751930 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerStarted","Data":"a5dcbb194c7459e74d972ad1984296c67f3ba78a3c11fb4a40c9945ea4d35993"} Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.751955 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerStarted","Data":"58411a6ca16b335c9514c9172eb72f31124cc761fb33338cfd030519d6e8465a"} Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.753617 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5954c4f6-p5w62" event={"ID":"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401","Type":"ContainerStarted","Data":"8ab0e56000d1f48b492e67d99f62568544e1cace86c60eaf1cc283a76f48317f"} Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.753646 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5954c4f6-p5w62" event={"ID":"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401","Type":"ContainerStarted","Data":"7847e98e47a3aaddad115b88c35ff69f49b2fb8366f0879ba91f51b0f8f77ce4"} Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.753658 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f5954c4f6-p5w62" event={"ID":"3d5e5fc2-44f3-45d7-848c-ed40f1ea1401","Type":"ContainerStarted","Data":"ecbb0f467c511a3a31ad439bd31792308ab0dba2541cac3b29dd705e41eabad7"} Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.753921 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.754229 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.775470 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.775451103 podStartE2EDuration="3.775451103s" podCreationTimestamp="2026-02-17 13:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:47.76738952 +0000 UTC m=+1167.046775962" watchObservedRunningTime="2026-02-17 13:55:47.775451103 +0000 UTC m=+1167.054837545"
Feb 17 13:55:47 crc kubenswrapper[4768]: I0217 13:55:47.816241 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f5954c4f6-p5w62" podStartSLOduration=1.816220443 podStartE2EDuration="1.816220443s" podCreationTimestamp="2026-02-17 13:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:47.81144973 +0000 UTC m=+1167.090836182" watchObservedRunningTime="2026-02-17 13:55:47.816220443 +0000 UTC m=+1167.095606885"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.116908 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-686d5745f-p8vdx"]
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.117437 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-686d5745f-p8vdx" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-api" containerID="cri-o://fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777" gracePeriod=30
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.117501 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-686d5745f-p8vdx" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-httpd" containerID="cri-o://bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3" gracePeriod=30
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.162471 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-686d5745f-p8vdx"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.179897 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85664fc4b9-7bclg"]
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.181395 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.208250 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85664fc4b9-7bclg"]
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.355205 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-public-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.355272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-httpd-config\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.355394 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-combined-ca-bundle\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.355441 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-config\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.355487 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-ovndb-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.355528 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-internal-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.355578 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp5k2\" (UniqueName: \"kubernetes.io/projected/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-kube-api-access-jp5k2\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.457398 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-combined-ca-bundle\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.457471 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-config\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.457522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-ovndb-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.457560 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-internal-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.457607 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp5k2\" (UniqueName: \"kubernetes.io/projected/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-kube-api-access-jp5k2\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.457658 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-public-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.457702 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-httpd-config\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.466848 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-internal-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.466957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-config\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.467724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-ovndb-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.469532 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-public-tls-certs\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.470770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-httpd-config\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.479470 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-combined-ca-bundle\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.482915 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp5k2\" (UniqueName: \"kubernetes.io/projected/f0bb15c9-ac11-47c0-893f-5f0f36554f2b-kube-api-access-jp5k2\") pod \"neutron-85664fc4b9-7bclg\" (UID: \"f0bb15c9-ac11-47c0-893f-5f0f36554f2b\") " pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.502027 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.549307 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-684746c5d4-6lxfv"
Feb 17 13:55:48 crc kubenswrapper[4768]: I0217 13:55:48.819644 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6584d79658-wtxrc"
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.056168 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85664fc4b9-7bclg"]
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.779716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85664fc4b9-7bclg" event={"ID":"f0bb15c9-ac11-47c0-893f-5f0f36554f2b","Type":"ContainerStarted","Data":"9b1a00cc31cffae8408fcbf38ccb511d6109238d23c2f96d775c63fee659ea0f"}
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.779752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85664fc4b9-7bclg" event={"ID":"f0bb15c9-ac11-47c0-893f-5f0f36554f2b","Type":"ContainerStarted","Data":"29b6783cad920d9f60b3be0225c909c89216ce08f0f9fe664d517a1e954ea752"}
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.779763 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85664fc4b9-7bclg" event={"ID":"f0bb15c9-ac11-47c0-893f-5f0f36554f2b","Type":"ContainerStarted","Data":"26c2157a7e862376f5a9a316f2130337884ea4248f214684afee110650560d3e"}
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.782306 4768 generic.go:334] "Generic (PLEG): container finished" podID="2de38494-6385-477a-9ec8-2383ad286611" containerID="bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3" exitCode=0
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.782394 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686d5745f-p8vdx" event={"ID":"2de38494-6385-477a-9ec8-2383ad286611","Type":"ContainerDied","Data":"bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3"}
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.800083 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85664fc4b9-7bclg" podStartSLOduration=1.800062018 podStartE2EDuration="1.800062018s" podCreationTimestamp="2026-02-17 13:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:49.797395374 +0000 UTC m=+1169.076781826" watchObservedRunningTime="2026-02-17 13:55:49.800062018 +0000 UTC m=+1169.079448460"
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.814817 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-686d5745f-p8vdx" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": dial tcp 10.217.0.153:9696: connect: connection refused"
Feb 17 13:55:49 crc kubenswrapper[4768]: I0217 13:55:49.978643 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.267118 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4"
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.324298 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.356713 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-x5846"]
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.356926 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-x5846" podUID="9ce4d08f-4b2c-4831-acce-546ddff7277a" containerName="dnsmasq-dns" containerID="cri-o://006722482ae4b4fe8cbbb77abdaf4c58cb033909d2a1b6b5a0cdcb756fd45af2" gracePeriod=10
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.614997 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6584d79658-wtxrc"
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.717036 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-684746c5d4-6lxfv"]
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.717270 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon-log" containerID="cri-o://90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08" gracePeriod=30
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.717649 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" containerID="cri-o://4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1" gracePeriod=30
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.736716 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": EOF"
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.837390 4768 generic.go:334] "Generic (PLEG): container finished" podID="9ce4d08f-4b2c-4831-acce-546ddff7277a" containerID="006722482ae4b4fe8cbbb77abdaf4c58cb033909d2a1b6b5a0cdcb756fd45af2" exitCode=0
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.837490 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-x5846" event={"ID":"9ce4d08f-4b2c-4831-acce-546ddff7277a","Type":"ContainerDied","Data":"006722482ae4b4fe8cbbb77abdaf4c58cb033909d2a1b6b5a0cdcb756fd45af2"}
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.858165 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerStarted","Data":"1b6e2d4b982055d878e5d006e51c15bdef9eb88065475bfa61afa025bf52a48b"}
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.858697 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85664fc4b9-7bclg"
Feb 17 13:55:50 crc kubenswrapper[4768]: I0217 13:55:50.967732 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.078610 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.245627 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-nb\") pod \"9ce4d08f-4b2c-4831-acce-546ddff7277a\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.245701 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-config\") pod \"9ce4d08f-4b2c-4831-acce-546ddff7277a\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.245823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbvz8\" (UniqueName: \"kubernetes.io/projected/9ce4d08f-4b2c-4831-acce-546ddff7277a-kube-api-access-pbvz8\") pod \"9ce4d08f-4b2c-4831-acce-546ddff7277a\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.245911 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-sb\") pod \"9ce4d08f-4b2c-4831-acce-546ddff7277a\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.245957 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-svc\") pod \"9ce4d08f-4b2c-4831-acce-546ddff7277a\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.246021 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-swift-storage-0\") pod \"9ce4d08f-4b2c-4831-acce-546ddff7277a\" (UID: \"9ce4d08f-4b2c-4831-acce-546ddff7277a\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.257551 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce4d08f-4b2c-4831-acce-546ddff7277a-kube-api-access-pbvz8" (OuterVolumeSpecName: "kube-api-access-pbvz8") pod "9ce4d08f-4b2c-4831-acce-546ddff7277a" (UID: "9ce4d08f-4b2c-4831-acce-546ddff7277a"). InnerVolumeSpecName "kube-api-access-pbvz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.326897 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ce4d08f-4b2c-4831-acce-546ddff7277a" (UID: "9ce4d08f-4b2c-4831-acce-546ddff7277a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.340352 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-config" (OuterVolumeSpecName: "config") pod "9ce4d08f-4b2c-4831-acce-546ddff7277a" (UID: "9ce4d08f-4b2c-4831-acce-546ddff7277a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.348076 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.348117 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-config\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.348127 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbvz8\" (UniqueName: \"kubernetes.io/projected/9ce4d08f-4b2c-4831-acce-546ddff7277a-kube-api-access-pbvz8\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.356216 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ce4d08f-4b2c-4831-acce-546ddff7277a" (UID: "9ce4d08f-4b2c-4831-acce-546ddff7277a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.385013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ce4d08f-4b2c-4831-acce-546ddff7277a" (UID: "9ce4d08f-4b2c-4831-acce-546ddff7277a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.409228 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ce4d08f-4b2c-4831-acce-546ddff7277a" (UID: "9ce4d08f-4b2c-4831-acce-546ddff7277a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.449422 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.449460 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.449474 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ce4d08f-4b2c-4831-acce-546ddff7277a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.677687 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-686d5745f-p8vdx"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.754293 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-config\") pod \"2de38494-6385-477a-9ec8-2383ad286611\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.754415 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-ovndb-tls-certs\") pod \"2de38494-6385-477a-9ec8-2383ad286611\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.754484 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-httpd-config\") pod \"2de38494-6385-477a-9ec8-2383ad286611\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.754555 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-internal-tls-certs\") pod \"2de38494-6385-477a-9ec8-2383ad286611\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.754624 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdx7f\" (UniqueName: \"kubernetes.io/projected/2de38494-6385-477a-9ec8-2383ad286611-kube-api-access-rdx7f\") pod \"2de38494-6385-477a-9ec8-2383ad286611\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.754731 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-public-tls-certs\") pod \"2de38494-6385-477a-9ec8-2383ad286611\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.754779 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-combined-ca-bundle\") pod \"2de38494-6385-477a-9ec8-2383ad286611\" (UID: \"2de38494-6385-477a-9ec8-2383ad286611\") "
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.759415 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2de38494-6385-477a-9ec8-2383ad286611" (UID: "2de38494-6385-477a-9ec8-2383ad286611"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.760018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de38494-6385-477a-9ec8-2383ad286611-kube-api-access-rdx7f" (OuterVolumeSpecName: "kube-api-access-rdx7f") pod "2de38494-6385-477a-9ec8-2383ad286611" (UID: "2de38494-6385-477a-9ec8-2383ad286611"). InnerVolumeSpecName "kube-api-access-rdx7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.830700 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2de38494-6385-477a-9ec8-2383ad286611" (UID: "2de38494-6385-477a-9ec8-2383ad286611"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.846866 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2de38494-6385-477a-9ec8-2383ad286611" (UID: "2de38494-6385-477a-9ec8-2383ad286611"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.852377 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-config" (OuterVolumeSpecName: "config") pod "2de38494-6385-477a-9ec8-2383ad286611" (UID: "2de38494-6385-477a-9ec8-2383ad286611"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.856741 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.857037 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.857110 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdx7f\" (UniqueName: \"kubernetes.io/projected/2de38494-6385-477a-9ec8-2383ad286611-kube-api-access-rdx7f\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.857124 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.857138 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-config\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.858276 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2de38494-6385-477a-9ec8-2383ad286611" (UID: "2de38494-6385-477a-9ec8-2383ad286611"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.882410 4768 generic.go:334] "Generic (PLEG): container finished" podID="2de38494-6385-477a-9ec8-2383ad286611" containerID="fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777" exitCode=0
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.882512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686d5745f-p8vdx" event={"ID":"2de38494-6385-477a-9ec8-2383ad286611","Type":"ContainerDied","Data":"fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777"}
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.882542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686d5745f-p8vdx" event={"ID":"2de38494-6385-477a-9ec8-2383ad286611","Type":"ContainerDied","Data":"6f92d699f2bc25182ef672b461cd4434c814f8d9f57ce67b159fb40da99ec8ed"}
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.882560 4768 scope.go:117] "RemoveContainer" containerID="bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.882669 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-686d5745f-p8vdx"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.888983 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerStarted","Data":"62831d24c9b06763090b377ac82922512e8105844cc0693394a6ea50479e6e3e"}
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.889456 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.891760 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2de38494-6385-477a-9ec8-2383ad286611" (UID: "2de38494-6385-477a-9ec8-2383ad286611"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.899609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-x5846" event={"ID":"9ce4d08f-4b2c-4831-acce-546ddff7277a","Type":"ContainerDied","Data":"1b70669dbe1d78fe68e25e287f4ffd999ee11277612a4732005a3526475d34c3"}
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.899810 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="cinder-scheduler" containerID="cri-o://a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024" gracePeriod=30
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.899956 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="probe" containerID="cri-o://971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899" gracePeriod=30
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.900168 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-x5846"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.933066 4768 scope.go:117] "RemoveContainer" containerID="fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.951378 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.932703505 podStartE2EDuration="7.951352552s" podCreationTimestamp="2026-02-17 13:55:44 +0000 UTC" firstStartedPulling="2026-02-17 13:55:45.567439057 +0000 UTC m=+1164.846825499" lastFinishedPulling="2026-02-17 13:55:51.586088104 +0000 UTC m=+1170.865474546" observedRunningTime="2026-02-17 13:55:51.912915807 +0000 UTC m=+1171.192302259" watchObservedRunningTime="2026-02-17 13:55:51.951352552 +0000 UTC m=+1171.230738994"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.958389 4768 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.958417 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2de38494-6385-477a-9ec8-2383ad286611-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.974869 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-x5846"]
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.990912 4768 scope.go:117] "RemoveContainer" containerID="bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3"
Feb 17 13:55:51 crc kubenswrapper[4768]: E0217 13:55:51.992023 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3\": container with ID starting with bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3 not found: ID does not exist" containerID="bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.992049 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3"} err="failed to get container status \"bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3\": rpc error: code = NotFound desc = could not find container \"bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3\": container with ID starting with bf84fafe4fc7f26ccccf6d413c8d3e7af106f1cc068189749c02e1aa10b69fe3 not found: ID does not exist"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.992068 4768 scope.go:117] "RemoveContainer" containerID="fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777"
Feb 17 13:55:51 crc kubenswrapper[4768]: E0217 13:55:51.992305 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777\": container with ID starting with fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777 not found: ID does not exist" containerID="fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.992323 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777"} err="failed to get container status \"fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777\": rpc error: code = NotFound desc = could not find container \"fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777\": container with ID starting with fcbc857d76003d061ae2ada53814e510b340c56d8f411b185e781d8fc8783777 not found: ID does not exist"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.992334 4768 scope.go:117] "RemoveContainer" containerID="006722482ae4b4fe8cbbb77abdaf4c58cb033909d2a1b6b5a0cdcb756fd45af2"
Feb 17 13:55:51 crc kubenswrapper[4768]: I0217 13:55:51.997892 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-x5846"]
Feb 17 13:55:52 crc kubenswrapper[4768]: I0217 13:55:52.013898 4768 scope.go:117] "RemoveContainer" containerID="2b489a10eb1f6fd897082fc1b09044c15df00c13e18d839bde03c47b74d55153"
Feb 17 13:55:52 crc kubenswrapper[4768]: I0217 13:55:52.177058 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:52 crc kubenswrapper[4768]: I0217 13:55:52.244899 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-686d5745f-p8vdx"]
Feb 17 13:55:52 crc kubenswrapper[4768]: I0217 13:55:52.254190 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-686d5745f-p8vdx"]
Feb 17 13:55:52 crc kubenswrapper[4768]: I0217 13:55:52.492042 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg"
Feb 17 13:55:52 crc kubenswrapper[4768]: I0217 13:55:52.910436 4768 generic.go:334] "Generic (PLEG): container finished" podID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerID="971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899" exitCode=0
Feb 17 13:55:52 crc kubenswrapper[4768]: I0217 13:55:52.910513 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785","Type":"ContainerDied","Data":"971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899"}
Feb 17 13:55:53 crc kubenswrapper[4768]: I0217 13:55:53.546246 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de38494-6385-477a-9ec8-2383ad286611" path="/var/lib/kubelet/pods/2de38494-6385-477a-9ec8-2383ad286611/volumes"
Feb 17 13:55:53 crc kubenswrapper[4768]: I0217 13:55:53.547021 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce4d08f-4b2c-4831-acce-546ddff7277a" path="/var/lib/kubelet/pods/9ce4d08f-4b2c-4831-acce-546ddff7277a/volumes"
Feb 17 13:55:53 crc kubenswrapper[4768]: I0217 13:55:53.866956 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:51674->10.217.0.147:8443: read: connection reset by peer"
Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.575325 4768 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.615548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-combined-ca-bundle\") pod \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.615644 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-etc-machine-id\") pod \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.615763 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data\") pod \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.615813 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7854\" (UniqueName: \"kubernetes.io/projected/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-kube-api-access-r7854\") pod \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.615898 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-scripts\") pod \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.615948 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data-custom\") pod \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\" (UID: \"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785\") " Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.617369 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" (UID: "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.624207 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-scripts" (OuterVolumeSpecName: "scripts") pod "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" (UID: "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.627589 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-kube-api-access-r7854" (OuterVolumeSpecName: "kube-api-access-r7854") pod "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" (UID: "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785"). InnerVolumeSpecName "kube-api-access-r7854". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.645133 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" (UID: "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.704734 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" (UID: "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.718948 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7854\" (UniqueName: \"kubernetes.io/projected/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-kube-api-access-r7854\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.720566 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.720586 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.720595 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.720624 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.726818 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data" (OuterVolumeSpecName: "config-data") pod "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" (UID: "1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.823134 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.936394 4768 generic.go:334] "Generic (PLEG): container finished" podID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerID="4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1" exitCode=0 Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.936446 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684746c5d4-6lxfv" event={"ID":"c20ad4a2-cf3e-4390-9141-1cc58518fd2b","Type":"ContainerDied","Data":"4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1"} Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.947372 4768 generic.go:334] "Generic (PLEG): container finished" podID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerID="a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024" exitCode=0 Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.947428 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785","Type":"ContainerDied","Data":"a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024"} Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.947430 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.947486 4768 scope.go:117] "RemoveContainer" containerID="971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899" Feb 17 13:55:54 crc kubenswrapper[4768]: I0217 13:55:54.947472 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785","Type":"ContainerDied","Data":"d21534418276c9f8a7adcafdf40e136748eabc8b4ae771108b1f6db433f1d336"} Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.014990 4768 scope.go:117] "RemoveContainer" containerID="a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.035733 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.055381 4768 scope.go:117] "RemoveContainer" containerID="971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899" Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.056321 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899\": container with ID starting with 971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899 not found: ID does not exist" containerID="971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.056392 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899"} err="failed to get container status \"971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899\": rpc error: code = NotFound desc = could not find container \"971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899\": container with 
ID starting with 971ae71742d842e6e5629b4b0bdbfb502de9857f6e4b5ce43ede79eef76d2899 not found: ID does not exist" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.056424 4768 scope.go:117] "RemoveContainer" containerID="a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024" Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.056829 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024\": container with ID starting with a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024 not found: ID does not exist" containerID="a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.056866 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024"} err="failed to get container status \"a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024\": rpc error: code = NotFound desc = could not find container \"a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024\": container with ID starting with a36b16178e7cf5c2d82b2bbdfaa59b932c020e50559ffbebaa49f29346781024 not found: ID does not exist" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.061167 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.073744 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.074197 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="probe" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074227 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="probe" Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.074249 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce4d08f-4b2c-4831-acce-546ddff7277a" containerName="dnsmasq-dns" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074263 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce4d08f-4b2c-4831-acce-546ddff7277a" containerName="dnsmasq-dns" Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.074272 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="cinder-scheduler" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074279 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="cinder-scheduler" Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.074297 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-api" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074304 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-api" Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.074321 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-httpd" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074327 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-httpd" Feb 17 13:55:55 crc kubenswrapper[4768]: E0217 13:55:55.074343 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce4d08f-4b2c-4831-acce-546ddff7277a" containerName="init" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074350 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce4d08f-4b2c-4831-acce-546ddff7277a" 
containerName="init" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074567 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-api" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074585 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="cinder-scheduler" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074594 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce4d08f-4b2c-4831-acce-546ddff7277a" containerName="dnsmasq-dns" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074608 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" containerName="probe" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.074644 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de38494-6385-477a-9ec8-2383ad286611" containerName="neutron-httpd" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.075614 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.081423 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.084646 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.132582 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.132640 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.132808 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd2b9dae-27bf-467c-96e0-194f0e25b814-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.132881 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk2j5\" (UniqueName: \"kubernetes.io/projected/bd2b9dae-27bf-467c-96e0-194f0e25b814-kube-api-access-rk2j5\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 
13:55:55.132917 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.133147 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.234429 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.234485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.234530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.234622 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/bd2b9dae-27bf-467c-96e0-194f0e25b814-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.234658 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk2j5\" (UniqueName: \"kubernetes.io/projected/bd2b9dae-27bf-467c-96e0-194f0e25b814-kube-api-access-rk2j5\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.234681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.234727 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd2b9dae-27bf-467c-96e0-194f0e25b814-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.238297 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.238983 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " 
pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.242484 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.243941 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2b9dae-27bf-467c-96e0-194f0e25b814-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.266694 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk2j5\" (UniqueName: \"kubernetes.io/projected/bd2b9dae-27bf-467c-96e0-194f0e25b814-kube-api-access-rk2j5\") pod \"cinder-scheduler-0\" (UID: \"bd2b9dae-27bf-467c-96e0-194f0e25b814\") " pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.407715 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.552084 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785" path="/var/lib/kubelet/pods/1ab3c1f4-c7cd-4a4b-b540-d5cf3c239785/volumes" Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.960151 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd2b9dae-27bf-467c-96e0-194f0e25b814","Type":"ContainerStarted","Data":"19d3cc00e3b62fd1249be9b7b51fe9e36b59a00b9a5603f5ddbf48a5b56be48e"} Feb 17 13:55:55 crc kubenswrapper[4768]: I0217 13:55:55.968232 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 13:55:56 crc kubenswrapper[4768]: I0217 13:55:56.023417 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 17 13:55:56 crc kubenswrapper[4768]: I0217 13:55:56.970979 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd2b9dae-27bf-467c-96e0-194f0e25b814","Type":"ContainerStarted","Data":"e5a7a61ad88f9776b1169d5c24482527c62a8e7b4cab174b9b17b38d374c65c6"} Feb 17 13:55:57 crc kubenswrapper[4768]: I0217 13:55:57.205374 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 13:55:57 crc kubenswrapper[4768]: I0217 13:55:57.981820 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd2b9dae-27bf-467c-96e0-194f0e25b814","Type":"ContainerStarted","Data":"184064fb2d2adaf1692976c0a1636b46288cc1f88e117ea93af632d3c3b18434"} Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.013936 
4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.013913374 podStartE2EDuration="3.013913374s" podCreationTimestamp="2026-02-17 13:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:55:58.003213477 +0000 UTC m=+1177.282599919" watchObservedRunningTime="2026-02-17 13:55:58.013913374 +0000 UTC m=+1177.293299826" Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.248524 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.470398 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f5954c4f6-p5w62" Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.545588 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c9fbc7fd6-zqhzg"] Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.545838 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api-log" containerID="cri-o://43e7ea3b3299e81c360a334cd1a4c79c1dde1801b98aa5377fc80a9a292e924e" gracePeriod=30 Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.546177 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api" containerID="cri-o://e07f63dbdb33f6fa1ef3c171110eaecf383514533db5e7ad803e547be67ad11c" gracePeriod=30 Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.991540 4768 generic.go:334] "Generic (PLEG): container finished" podID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerID="43e7ea3b3299e81c360a334cd1a4c79c1dde1801b98aa5377fc80a9a292e924e" 
exitCode=143 Feb 17 13:55:58 crc kubenswrapper[4768]: I0217 13:55:58.991640 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" event={"ID":"e2a044ad-f31e-4c3b-9659-91650838f9da","Type":"ContainerDied","Data":"43e7ea3b3299e81c360a334cd1a4c79c1dde1801b98aa5377fc80a9a292e924e"} Feb 17 13:55:59 crc kubenswrapper[4768]: I0217 13:55:59.705574 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:55:59 crc kubenswrapper[4768]: I0217 13:55:59.765021 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:56:00 crc kubenswrapper[4768]: I0217 13:56:00.126685 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-77c78fc8c5-fgk9h" Feb 17 13:56:00 crc kubenswrapper[4768]: I0217 13:56:00.408507 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 13:56:01 crc kubenswrapper[4768]: I0217 13:56:01.706315 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:55034->10.217.0.163:9311: read: connection reset by peer" Feb 17 13:56:01 crc kubenswrapper[4768]: I0217 13:56:01.706341 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:55044->10.217.0.163:9311: read: connection reset by peer" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.022394 4768 generic.go:334] "Generic (PLEG): container finished" podID="e2a044ad-f31e-4c3b-9659-91650838f9da" 
containerID="e07f63dbdb33f6fa1ef3c171110eaecf383514533db5e7ad803e547be67ad11c" exitCode=0 Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.022489 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" event={"ID":"e2a044ad-f31e-4c3b-9659-91650838f9da","Type":"ContainerDied","Data":"e07f63dbdb33f6fa1ef3c171110eaecf383514533db5e7ad803e547be67ad11c"} Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.138215 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.269425 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data\") pod \"e2a044ad-f31e-4c3b-9659-91650838f9da\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.269580 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rhb\" (UniqueName: \"kubernetes.io/projected/e2a044ad-f31e-4c3b-9659-91650838f9da-kube-api-access-w9rhb\") pod \"e2a044ad-f31e-4c3b-9659-91650838f9da\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.270251 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data-custom\") pod \"e2a044ad-f31e-4c3b-9659-91650838f9da\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.270286 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a044ad-f31e-4c3b-9659-91650838f9da-logs\") pod \"e2a044ad-f31e-4c3b-9659-91650838f9da\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " 
Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.270654 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-combined-ca-bundle\") pod \"e2a044ad-f31e-4c3b-9659-91650838f9da\" (UID: \"e2a044ad-f31e-4c3b-9659-91650838f9da\") " Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.270938 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a044ad-f31e-4c3b-9659-91650838f9da-logs" (OuterVolumeSpecName: "logs") pod "e2a044ad-f31e-4c3b-9659-91650838f9da" (UID: "e2a044ad-f31e-4c3b-9659-91650838f9da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.271379 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a044ad-f31e-4c3b-9659-91650838f9da-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.276941 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a044ad-f31e-4c3b-9659-91650838f9da-kube-api-access-w9rhb" (OuterVolumeSpecName: "kube-api-access-w9rhb") pod "e2a044ad-f31e-4c3b-9659-91650838f9da" (UID: "e2a044ad-f31e-4c3b-9659-91650838f9da"). InnerVolumeSpecName "kube-api-access-w9rhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.276930 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2a044ad-f31e-4c3b-9659-91650838f9da" (UID: "e2a044ad-f31e-4c3b-9659-91650838f9da"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.319830 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a044ad-f31e-4c3b-9659-91650838f9da" (UID: "e2a044ad-f31e-4c3b-9659-91650838f9da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.348749 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data" (OuterVolumeSpecName: "config-data") pod "e2a044ad-f31e-4c3b-9659-91650838f9da" (UID: "e2a044ad-f31e-4c3b-9659-91650838f9da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.373413 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.373449 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.373458 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rhb\" (UniqueName: \"kubernetes.io/projected/e2a044ad-f31e-4c3b-9659-91650838f9da-kube-api-access-w9rhb\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.373470 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a044ad-f31e-4c3b-9659-91650838f9da-config-data-custom\") on node \"crc\" 
DevicePath \"\"" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.395779 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 13:56:02 crc kubenswrapper[4768]: E0217 13:56:02.396254 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.396275 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api" Feb 17 13:56:02 crc kubenswrapper[4768]: E0217 13:56:02.396315 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api-log" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.396323 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api-log" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.396545 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api-log" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.396565 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" containerName="barbican-api" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.397320 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.402235 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.402367 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-g29lj" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.402395 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.408772 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.578146 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4966t\" (UniqueName: \"kubernetes.io/projected/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-kube-api-access-4966t\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.578488 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.578647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.578890 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.681596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4966t\" (UniqueName: \"kubernetes.io/projected/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-kube-api-access-4966t\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.681691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.681775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.681870 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.682718 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.686653 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.687837 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.702648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4966t\" (UniqueName: \"kubernetes.io/projected/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-kube-api-access-4966t\") pod \"openstackclient\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.737955 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.778622 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.793859 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.842514 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.843595 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.875689 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 13:56:02 crc kubenswrapper[4768]: E0217 13:56:02.986709 4768 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 13:56:02 crc kubenswrapper[4768]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_1a852d52-8ec3-48fc-af5d-ecb7068b6d2b_0(26320f7cf772d232eebeef6fc625d748a21d2b51a96400e9038e617e12d324d0): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"26320f7cf772d232eebeef6fc625d748a21d2b51a96400e9038e617e12d324d0" Netns:"/var/run/netns/bab822b9-c4db-4c05-968e-eed50b2330a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=26320f7cf772d232eebeef6fc625d748a21d2b51a96400e9038e617e12d324d0;K8S_POD_UID=1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b]: expected pod UID "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" but got 
"b765d360-2c6c-4740-b75e-bd16636a41e0" from Kube API Feb 17 13:56:02 crc kubenswrapper[4768]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 13:56:02 crc kubenswrapper[4768]: > Feb 17 13:56:02 crc kubenswrapper[4768]: E0217 13:56:02.986785 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 13:56:02 crc kubenswrapper[4768]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_1a852d52-8ec3-48fc-af5d-ecb7068b6d2b_0(26320f7cf772d232eebeef6fc625d748a21d2b51a96400e9038e617e12d324d0): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"26320f7cf772d232eebeef6fc625d748a21d2b51a96400e9038e617e12d324d0" Netns:"/var/run/netns/bab822b9-c4db-4c05-968e-eed50b2330a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=26320f7cf772d232eebeef6fc625d748a21d2b51a96400e9038e617e12d324d0;K8S_POD_UID=1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b]: expected pod UID "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" but got "b765d360-2c6c-4740-b75e-bd16636a41e0" from Kube API Feb 17 13:56:02 crc kubenswrapper[4768]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 13:56:02 crc kubenswrapper[4768]: > pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.991692 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b765d360-2c6c-4740-b75e-bd16636a41e0-openstack-config\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.991840 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b765d360-2c6c-4740-b75e-bd16636a41e0-openstack-config-secret\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.991958 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpms4\" (UniqueName: \"kubernetes.io/projected/b765d360-2c6c-4740-b75e-bd16636a41e0-kube-api-access-gpms4\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:02 crc kubenswrapper[4768]: I0217 13:56:02.992043 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b765d360-2c6c-4740-b75e-bd16636a41e0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " 
pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.031944 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.032195 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.032193 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c9fbc7fd6-zqhzg" event={"ID":"e2a044ad-f31e-4c3b-9659-91650838f9da","Type":"ContainerDied","Data":"ed30c74ad3e19f911c9a27c219eff65767a6992c8f2c18d92b5ea9e88e5b4f43"} Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.032242 4768 scope.go:117] "RemoveContainer" containerID="e07f63dbdb33f6fa1ef3c171110eaecf383514533db5e7ad803e547be67ad11c" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.035306 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" podUID="b765d360-2c6c-4740-b75e-bd16636a41e0" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.060479 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.081800 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.083510 4768 scope.go:117] "RemoveContainer" containerID="43e7ea3b3299e81c360a334cd1a4c79c1dde1801b98aa5377fc80a9a292e924e" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.083981 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c9fbc7fd6-zqhzg"] Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.084893 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f459487b8-6m6q4" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.093053 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6c9fbc7fd6-zqhzg"] Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.093857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b765d360-2c6c-4740-b75e-bd16636a41e0-openstack-config\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.093923 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b765d360-2c6c-4740-b75e-bd16636a41e0-openstack-config-secret\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.094262 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpms4\" (UniqueName: \"kubernetes.io/projected/b765d360-2c6c-4740-b75e-bd16636a41e0-kube-api-access-gpms4\") pod \"openstackclient\" (UID: 
\"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.094309 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b765d360-2c6c-4740-b75e-bd16636a41e0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.096318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b765d360-2c6c-4740-b75e-bd16636a41e0-openstack-config\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.101740 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b765d360-2c6c-4740-b75e-bd16636a41e0-openstack-config-secret\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.102940 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b765d360-2c6c-4740-b75e-bd16636a41e0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.114750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpms4\" (UniqueName: \"kubernetes.io/projected/b765d360-2c6c-4740-b75e-bd16636a41e0-kube-api-access-gpms4\") pod \"openstackclient\" (UID: \"b765d360-2c6c-4740-b75e-bd16636a41e0\") " pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.168900 4768 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/placement-6c85df6c44-rr84t"] Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.169154 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c85df6c44-rr84t" podUID="55c82341-24e7-4524-82c7-996a851af418" containerName="placement-log" containerID="cri-o://94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179" gracePeriod=30 Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.169283 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c85df6c44-rr84t" podUID="55c82341-24e7-4524-82c7-996a851af418" containerName="placement-api" containerID="cri-o://e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a" gracePeriod=30 Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.195347 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4966t\" (UniqueName: \"kubernetes.io/projected/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-kube-api-access-4966t\") pod \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.195787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config-secret\") pod \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.195826 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-combined-ca-bundle\") pod \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.195859 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config\") pod \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\" (UID: \"1a852d52-8ec3-48fc-af5d-ecb7068b6d2b\") " Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.199781 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" (UID: "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.201494 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.202714 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" (UID: "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.202903 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-kube-api-access-4966t" (OuterVolumeSpecName: "kube-api-access-4966t") pod "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" (UID: "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b"). InnerVolumeSpecName "kube-api-access-4966t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.203294 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" (UID: "1a852d52-8ec3-48fc-af5d-ecb7068b6d2b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.297806 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.297854 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.297868 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.297880 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4966t\" (UniqueName: \"kubernetes.io/projected/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b-kube-api-access-4966t\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.545410 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" path="/var/lib/kubelet/pods/1a852d52-8ec3-48fc-af5d-ecb7068b6d2b/volumes" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.545912 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e2a044ad-f31e-4c3b-9659-91650838f9da" path="/var/lib/kubelet/pods/e2a044ad-f31e-4c3b-9659-91650838f9da/volumes" Feb 17 13:56:03 crc kubenswrapper[4768]: I0217 13:56:03.654488 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 13:56:04 crc kubenswrapper[4768]: I0217 13:56:04.040676 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b765d360-2c6c-4740-b75e-bd16636a41e0","Type":"ContainerStarted","Data":"6a173085b1ed2e59e523eefe9bea3a9fafbe3f70edf4db06bfc8d57ba8b068c4"} Feb 17 13:56:04 crc kubenswrapper[4768]: I0217 13:56:04.042832 4768 generic.go:334] "Generic (PLEG): container finished" podID="55c82341-24e7-4524-82c7-996a851af418" containerID="94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179" exitCode=143 Feb 17 13:56:04 crc kubenswrapper[4768]: I0217 13:56:04.042916 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c85df6c44-rr84t" event={"ID":"55c82341-24e7-4524-82c7-996a851af418","Type":"ContainerDied","Data":"94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179"} Feb 17 13:56:04 crc kubenswrapper[4768]: I0217 13:56:04.042981 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 13:56:04 crc kubenswrapper[4768]: I0217 13:56:04.049859 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1a852d52-8ec3-48fc-af5d-ecb7068b6d2b" podUID="b765d360-2c6c-4740-b75e-bd16636a41e0" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.637372 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.795400 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6999b7cf5c-4f5kt"] Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.797335 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.800859 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.801082 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.802713 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.810395 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6999b7cf5c-4f5kt"] Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.947392 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-internal-tls-certs\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.947462 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-etc-swift\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.947485 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-public-tls-certs\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.947822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-combined-ca-bundle\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.947994 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-log-httpd\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.948028 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-config-data\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 
13:56:05.948176 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpzc\" (UniqueName: \"kubernetes.io/projected/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-kube-api-access-kgpzc\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:05 crc kubenswrapper[4768]: I0217 13:56:05.948251 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-run-httpd\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.024322 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.049923 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-combined-ca-bundle\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.049992 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-log-httpd\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 
13:56:06.050012 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-config-data\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.050063 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpzc\" (UniqueName: \"kubernetes.io/projected/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-kube-api-access-kgpzc\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.050089 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-run-httpd\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.050137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-internal-tls-certs\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.050170 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-etc-swift\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.050206 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-public-tls-certs\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.051384 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-log-httpd\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.052318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-run-httpd\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.057290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-internal-tls-certs\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.057959 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-etc-swift\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.062941 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-combined-ca-bundle\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.063402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-public-tls-certs\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.066527 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpzc\" (UniqueName: \"kubernetes.io/projected/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-kube-api-access-kgpzc\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.068855 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac4ebb9-cc51-4934-b4c7-590830f2a04a-config-data\") pod \"swift-proxy-6999b7cf5c-4f5kt\" (UID: \"4ac4ebb9-cc51-4934-b4c7-590830f2a04a\") " pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.116227 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.660611 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6999b7cf5c-4f5kt"] Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.707197 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863146 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c82341-24e7-4524-82c7-996a851af418-logs\") pod \"55c82341-24e7-4524-82c7-996a851af418\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-internal-tls-certs\") pod \"55c82341-24e7-4524-82c7-996a851af418\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863389 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-public-tls-certs\") pod \"55c82341-24e7-4524-82c7-996a851af418\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863463 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-combined-ca-bundle\") pod \"55c82341-24e7-4524-82c7-996a851af418\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863497 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nckcb\" (UniqueName: \"kubernetes.io/projected/55c82341-24e7-4524-82c7-996a851af418-kube-api-access-nckcb\") pod \"55c82341-24e7-4524-82c7-996a851af418\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863545 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-scripts\") pod \"55c82341-24e7-4524-82c7-996a851af418\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863607 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-config-data\") pod \"55c82341-24e7-4524-82c7-996a851af418\" (UID: \"55c82341-24e7-4524-82c7-996a851af418\") " Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.863748 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c82341-24e7-4524-82c7-996a851af418-logs" (OuterVolumeSpecName: "logs") pod "55c82341-24e7-4524-82c7-996a851af418" (UID: "55c82341-24e7-4524-82c7-996a851af418"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.864093 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c82341-24e7-4524-82c7-996a851af418-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.869403 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c82341-24e7-4524-82c7-996a851af418-kube-api-access-nckcb" (OuterVolumeSpecName: "kube-api-access-nckcb") pod "55c82341-24e7-4524-82c7-996a851af418" (UID: "55c82341-24e7-4524-82c7-996a851af418"). InnerVolumeSpecName "kube-api-access-nckcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.869924 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-scripts" (OuterVolumeSpecName: "scripts") pod "55c82341-24e7-4524-82c7-996a851af418" (UID: "55c82341-24e7-4524-82c7-996a851af418"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.932315 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-config-data" (OuterVolumeSpecName: "config-data") pod "55c82341-24e7-4524-82c7-996a851af418" (UID: "55c82341-24e7-4524-82c7-996a851af418"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.949922 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55c82341-24e7-4524-82c7-996a851af418" (UID: "55c82341-24e7-4524-82c7-996a851af418"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.966551 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.966586 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nckcb\" (UniqueName: \"kubernetes.io/projected/55c82341-24e7-4524-82c7-996a851af418-kube-api-access-nckcb\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.966600 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.966610 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-config-data\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.969905 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55c82341-24e7-4524-82c7-996a851af418" (UID: "55c82341-24e7-4524-82c7-996a851af418"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:06 crc kubenswrapper[4768]: I0217 13:56:06.973729 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55c82341-24e7-4524-82c7-996a851af418" (UID: "55c82341-24e7-4524-82c7-996a851af418"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.072697 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.072744 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c82341-24e7-4524-82c7-996a851af418-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.082277 4768 generic.go:334] "Generic (PLEG): container finished" podID="55c82341-24e7-4524-82c7-996a851af418" containerID="e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a" exitCode=0 Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.082332 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c85df6c44-rr84t" 
event={"ID":"55c82341-24e7-4524-82c7-996a851af418","Type":"ContainerDied","Data":"e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a"} Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.082359 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c85df6c44-rr84t" event={"ID":"55c82341-24e7-4524-82c7-996a851af418","Type":"ContainerDied","Data":"a7fe683dc88a8ba1e575fb22233a035467a95a7dfafc297fcf47b6ad63b3d340"} Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.082377 4768 scope.go:117] "RemoveContainer" containerID="e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.082482 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c85df6c44-rr84t" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.092424 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" event={"ID":"4ac4ebb9-cc51-4934-b4c7-590830f2a04a","Type":"ContainerStarted","Data":"bcadb410ce2a673b230d18f44c04176fb21250d964a059464af08083ed697f8f"} Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.092488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" event={"ID":"4ac4ebb9-cc51-4934-b4c7-590830f2a04a","Type":"ContainerStarted","Data":"c13a8afc7d2c2c4c2f27db2fe02a2e87d0d6e295685f540d4dab0ccd7827d2f4"} Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.166315 4768 scope.go:117] "RemoveContainer" containerID="94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.180689 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c85df6c44-rr84t"] Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.188021 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6c85df6c44-rr84t"] Feb 17 13:56:07 crc kubenswrapper[4768]: 
I0217 13:56:07.197856 4768 scope.go:117] "RemoveContainer" containerID="e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a" Feb 17 13:56:07 crc kubenswrapper[4768]: E0217 13:56:07.198332 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a\": container with ID starting with e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a not found: ID does not exist" containerID="e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.198375 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a"} err="failed to get container status \"e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a\": rpc error: code = NotFound desc = could not find container \"e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a\": container with ID starting with e0584fae020d78e354808a3803e647082acf2d6b1f5b1bf761a5c618a2e4b69a not found: ID does not exist" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.198405 4768 scope.go:117] "RemoveContainer" containerID="94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179" Feb 17 13:56:07 crc kubenswrapper[4768]: E0217 13:56:07.198831 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179\": container with ID starting with 94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179 not found: ID does not exist" containerID="94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.198859 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179"} err="failed to get container status \"94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179\": rpc error: code = NotFound desc = could not find container \"94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179\": container with ID starting with 94c3a76cef44422cbd71d3dcc9ce17fa67b716c5bd3df5a80fdd5c6b8ca23179 not found: ID does not exist" Feb 17 13:56:07 crc kubenswrapper[4768]: I0217 13:56:07.547271 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c82341-24e7-4524-82c7-996a851af418" path="/var/lib/kubelet/pods/55c82341-24e7-4524-82c7-996a851af418/volumes" Feb 17 13:56:08 crc kubenswrapper[4768]: I0217 13:56:08.104088 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" event={"ID":"4ac4ebb9-cc51-4934-b4c7-590830f2a04a","Type":"ContainerStarted","Data":"a551bd3d683cc31532e476b578f3dc73040572be60316573e6bb89c3e20d4331"} Feb 17 13:56:08 crc kubenswrapper[4768]: I0217 13:56:08.105308 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:08 crc kubenswrapper[4768]: I0217 13:56:08.138946 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" podStartSLOduration=3.138918513 podStartE2EDuration="3.138918513s" podCreationTimestamp="2026-02-17 13:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:08.127567088 +0000 UTC m=+1187.406953540" watchObservedRunningTime="2026-02-17 13:56:08.138918513 +0000 UTC m=+1187.418304955" Feb 17 13:56:09 crc kubenswrapper[4768]: I0217 13:56:09.112909 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:09 crc kubenswrapper[4768]: I0217 
13:56:09.233796 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:09 crc kubenswrapper[4768]: I0217 13:56:09.234549 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="proxy-httpd" containerID="cri-o://62831d24c9b06763090b377ac82922512e8105844cc0693394a6ea50479e6e3e" gracePeriod=30 Feb 17 13:56:09 crc kubenswrapper[4768]: I0217 13:56:09.234565 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-central-agent" containerID="cri-o://58411a6ca16b335c9514c9172eb72f31124cc761fb33338cfd030519d6e8465a" gracePeriod=30 Feb 17 13:56:09 crc kubenswrapper[4768]: I0217 13:56:09.237247 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-notification-agent" containerID="cri-o://a5dcbb194c7459e74d972ad1984296c67f3ba78a3c11fb4a40c9945ea4d35993" gracePeriod=30 Feb 17 13:56:09 crc kubenswrapper[4768]: I0217 13:56:09.237813 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="sg-core" containerID="cri-o://1b6e2d4b982055d878e5d006e51c15bdef9eb88065475bfa61afa025bf52a48b" gracePeriod=30 Feb 17 13:56:09 crc kubenswrapper[4768]: I0217 13:56:09.244412 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 13:56:10 crc kubenswrapper[4768]: I0217 13:56:10.122521 4768 generic.go:334] "Generic (PLEG): container finished" podID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerID="62831d24c9b06763090b377ac82922512e8105844cc0693394a6ea50479e6e3e" exitCode=0 Feb 17 13:56:10 crc kubenswrapper[4768]: I0217 13:56:10.122802 4768 
generic.go:334] "Generic (PLEG): container finished" podID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerID="1b6e2d4b982055d878e5d006e51c15bdef9eb88065475bfa61afa025bf52a48b" exitCode=2 Feb 17 13:56:10 crc kubenswrapper[4768]: I0217 13:56:10.122811 4768 generic.go:334] "Generic (PLEG): container finished" podID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerID="58411a6ca16b335c9514c9172eb72f31124cc761fb33338cfd030519d6e8465a" exitCode=0 Feb 17 13:56:10 crc kubenswrapper[4768]: I0217 13:56:10.122603 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerDied","Data":"62831d24c9b06763090b377ac82922512e8105844cc0693394a6ea50479e6e3e"} Feb 17 13:56:10 crc kubenswrapper[4768]: I0217 13:56:10.122860 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerDied","Data":"1b6e2d4b982055d878e5d006e51c15bdef9eb88065475bfa61afa025bf52a48b"} Feb 17 13:56:10 crc kubenswrapper[4768]: I0217 13:56:10.122873 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerDied","Data":"58411a6ca16b335c9514c9172eb72f31124cc761fb33338cfd030519d6e8465a"} Feb 17 13:56:11 crc kubenswrapper[4768]: I0217 13:56:11.130845 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:11 crc kubenswrapper[4768]: I0217 13:56:11.133691 4768 generic.go:334] "Generic (PLEG): container finished" podID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerID="a5dcbb194c7459e74d972ad1984296c67f3ba78a3c11fb4a40c9945ea4d35993" exitCode=0 Feb 17 13:56:11 crc kubenswrapper[4768]: I0217 13:56:11.134562 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerDied","Data":"a5dcbb194c7459e74d972ad1984296c67f3ba78a3c11fb4a40c9945ea4d35993"} Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.443915 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.625345 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-sg-core-conf-yaml\") pod \"8dde7e08-dc91-4904-9e22-5e77b459a138\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.625713 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-combined-ca-bundle\") pod \"8dde7e08-dc91-4904-9e22-5e77b459a138\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.625741 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vktv\" (UniqueName: \"kubernetes.io/projected/8dde7e08-dc91-4904-9e22-5e77b459a138-kube-api-access-6vktv\") pod \"8dde7e08-dc91-4904-9e22-5e77b459a138\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.625759 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-log-httpd\") pod \"8dde7e08-dc91-4904-9e22-5e77b459a138\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.625793 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-config-data\") pod \"8dde7e08-dc91-4904-9e22-5e77b459a138\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.625817 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-run-httpd\") pod \"8dde7e08-dc91-4904-9e22-5e77b459a138\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.625859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-scripts\") pod \"8dde7e08-dc91-4904-9e22-5e77b459a138\" (UID: \"8dde7e08-dc91-4904-9e22-5e77b459a138\") " Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.626374 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8dde7e08-dc91-4904-9e22-5e77b459a138" (UID: "8dde7e08-dc91-4904-9e22-5e77b459a138"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.626435 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8dde7e08-dc91-4904-9e22-5e77b459a138" (UID: "8dde7e08-dc91-4904-9e22-5e77b459a138"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.631612 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-scripts" (OuterVolumeSpecName: "scripts") pod "8dde7e08-dc91-4904-9e22-5e77b459a138" (UID: "8dde7e08-dc91-4904-9e22-5e77b459a138"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.632386 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dde7e08-dc91-4904-9e22-5e77b459a138-kube-api-access-6vktv" (OuterVolumeSpecName: "kube-api-access-6vktv") pod "8dde7e08-dc91-4904-9e22-5e77b459a138" (UID: "8dde7e08-dc91-4904-9e22-5e77b459a138"). InnerVolumeSpecName "kube-api-access-6vktv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.654942 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8dde7e08-dc91-4904-9e22-5e77b459a138" (UID: "8dde7e08-dc91-4904-9e22-5e77b459a138"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.698793 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dde7e08-dc91-4904-9e22-5e77b459a138" (UID: "8dde7e08-dc91-4904-9e22-5e77b459a138"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.718967 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.719178 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="af700d67-c9a3-4577-b872-6ffd620ce9b5" containerName="kube-state-metrics" containerID="cri-o://07af6ff52633bd9c46b353cf08a42dbb5ea64d2d3f278bd8dccfb2c912b59bcd" gracePeriod=30 Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.730361 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.730386 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.730397 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vktv\" (UniqueName: \"kubernetes.io/projected/8dde7e08-dc91-4904-9e22-5e77b459a138-kube-api-access-6vktv\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.730409 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.730417 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dde7e08-dc91-4904-9e22-5e77b459a138-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.730425 4768 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.752521 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-config-data" (OuterVolumeSpecName: "config-data") pod "8dde7e08-dc91-4904-9e22-5e77b459a138" (UID: "8dde7e08-dc91-4904-9e22-5e77b459a138"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:14 crc kubenswrapper[4768]: I0217 13:56:14.832488 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dde7e08-dc91-4904-9e22-5e77b459a138-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.185557 4768 generic.go:334] "Generic (PLEG): container finished" podID="af700d67-c9a3-4577-b872-6ffd620ce9b5" containerID="07af6ff52633bd9c46b353cf08a42dbb5ea64d2d3f278bd8dccfb2c912b59bcd" exitCode=2 Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.185657 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"af700d67-c9a3-4577-b872-6ffd620ce9b5","Type":"ContainerDied","Data":"07af6ff52633bd9c46b353cf08a42dbb5ea64d2d3f278bd8dccfb2c912b59bcd"} Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.194174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8dde7e08-dc91-4904-9e22-5e77b459a138","Type":"ContainerDied","Data":"284a74bcc4e1b1e803bb852ffe4ad241cd0b5b84c3b72e0cd2a5058be6163ce3"} Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.194251 4768 scope.go:117] "RemoveContainer" containerID="62831d24c9b06763090b377ac82922512e8105844cc0693394a6ea50479e6e3e" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.194363 4768 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.198957 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b765d360-2c6c-4740-b75e-bd16636a41e0","Type":"ContainerStarted","Data":"765d615b65bf8f0c77e5e6aeeab3dded5243a0951afe023f99c50aff2b7e4880"} Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.221854 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.562228333 podStartE2EDuration="13.221837553s" podCreationTimestamp="2026-02-17 13:56:02 +0000 UTC" firstStartedPulling="2026-02-17 13:56:03.657821219 +0000 UTC m=+1182.937207661" lastFinishedPulling="2026-02-17 13:56:14.317430439 +0000 UTC m=+1193.596816881" observedRunningTime="2026-02-17 13:56:15.217204124 +0000 UTC m=+1194.496590566" watchObservedRunningTime="2026-02-17 13:56:15.221837553 +0000 UTC m=+1194.501223995" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.228389 4768 scope.go:117] "RemoveContainer" containerID="1b6e2d4b982055d878e5d006e51c15bdef9eb88065475bfa61afa025bf52a48b" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.264504 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.268453 4768 scope.go:117] "RemoveContainer" containerID="a5dcbb194c7459e74d972ad1984296c67f3ba78a3c11fb4a40c9945ea4d35993" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.281026 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.290343 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:15 crc kubenswrapper[4768]: E0217 13:56:15.290760 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c82341-24e7-4524-82c7-996a851af418" 
containerName="placement-log" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.290783 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c82341-24e7-4524-82c7-996a851af418" containerName="placement-log" Feb 17 13:56:15 crc kubenswrapper[4768]: E0217 13:56:15.290806 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-notification-agent" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.290814 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-notification-agent" Feb 17 13:56:15 crc kubenswrapper[4768]: E0217 13:56:15.290836 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="proxy-httpd" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.290844 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="proxy-httpd" Feb 17 13:56:15 crc kubenswrapper[4768]: E0217 13:56:15.290866 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c82341-24e7-4524-82c7-996a851af418" containerName="placement-api" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.290873 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c82341-24e7-4524-82c7-996a851af418" containerName="placement-api" Feb 17 13:56:15 crc kubenswrapper[4768]: E0217 13:56:15.290883 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-central-agent" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.290891 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-central-agent" Feb 17 13:56:15 crc kubenswrapper[4768]: E0217 13:56:15.290901 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="sg-core" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.290908 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="sg-core" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.291143 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="sg-core" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.291162 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-central-agent" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.291172 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="proxy-httpd" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.291195 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" containerName="ceilometer-notification-agent" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.291210 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c82341-24e7-4524-82c7-996a851af418" containerName="placement-log" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.291219 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c82341-24e7-4524-82c7-996a851af418" containerName="placement-api" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.293487 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.295265 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.295731 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.324836 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.380636 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.382923 4768 scope.go:117] "RemoveContainer" containerID="58411a6ca16b335c9514c9172eb72f31124cc761fb33338cfd030519d6e8465a" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.465480 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-config-data\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.465546 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w55c\" (UniqueName: \"kubernetes.io/projected/e41e9e01-66b3-4abc-b5f3-e3679c36be33-kube-api-access-7w55c\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.465574 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") 
" pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.465600 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-run-httpd\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.465658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.465687 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-scripts\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.465734 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-log-httpd\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.548441 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dde7e08-dc91-4904-9e22-5e77b459a138" path="/var/lib/kubelet/pods/8dde7e08-dc91-4904-9e22-5e77b459a138/volumes" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.566930 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z69w2\" (UniqueName: 
\"kubernetes.io/projected/af700d67-c9a3-4577-b872-6ffd620ce9b5-kube-api-access-z69w2\") pod \"af700d67-c9a3-4577-b872-6ffd620ce9b5\" (UID: \"af700d67-c9a3-4577-b872-6ffd620ce9b5\") " Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.567367 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-log-httpd\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.567452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-config-data\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.567485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w55c\" (UniqueName: \"kubernetes.io/projected/e41e9e01-66b3-4abc-b5f3-e3679c36be33-kube-api-access-7w55c\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.567510 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.567530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-run-httpd\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: 
I0217 13:56:15.567572 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.567594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-scripts\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.567856 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-log-httpd\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.568249 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-run-httpd\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.572770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-config-data\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.575276 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af700d67-c9a3-4577-b872-6ffd620ce9b5-kube-api-access-z69w2" (OuterVolumeSpecName: "kube-api-access-z69w2") pod "af700d67-c9a3-4577-b872-6ffd620ce9b5" (UID: 
"af700d67-c9a3-4577-b872-6ffd620ce9b5"). InnerVolumeSpecName "kube-api-access-z69w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.575768 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.576066 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.576323 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-scripts\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.589233 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w55c\" (UniqueName: \"kubernetes.io/projected/e41e9e01-66b3-4abc-b5f3-e3679c36be33-kube-api-access-7w55c\") pod \"ceilometer-0\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " pod="openstack/ceilometer-0" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.609427 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.609657 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerName="glance-log" 
containerID="cri-o://2d1c215245457bf85f991f88f31a0dca885fcb114dcec4dae932d7a001c6c78d" gracePeriod=30 Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.609718 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerName="glance-httpd" containerID="cri-o://296d6aba5a4d2bd2541c1c1b3437ee8e6f346ee60ab706f0f22283ada21455de" gracePeriod=30 Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.671144 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z69w2\" (UniqueName: \"kubernetes.io/projected/af700d67-c9a3-4577-b872-6ffd620ce9b5-kube-api-access-z69w2\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:15 crc kubenswrapper[4768]: I0217 13:56:15.689939 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.023759 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684746c5d4-6lxfv" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.123540 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6999b7cf5c-4f5kt" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.157141 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.235585 4768 generic.go:334] "Generic (PLEG): container finished" podID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerID="2d1c215245457bf85f991f88f31a0dca885fcb114dcec4dae932d7a001c6c78d" exitCode=143 Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.235669 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bc3b9ad-1d13-4214-9824-af7003192ace","Type":"ContainerDied","Data":"2d1c215245457bf85f991f88f31a0dca885fcb114dcec4dae932d7a001c6c78d"} Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.270047 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.270449 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"af700d67-c9a3-4577-b872-6ffd620ce9b5","Type":"ContainerDied","Data":"5552a05ba8b326989acffab4a0639d4e4408339023df02d5fcc2376e12d8f6e3"} Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.270514 4768 scope.go:117] "RemoveContainer" containerID="07af6ff52633bd9c46b353cf08a42dbb5ea64d2d3f278bd8dccfb2c912b59bcd" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.301458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerStarted","Data":"f42fd80fe21648b512a25013fef19367337ed2159aa61e4ac086d21d3559b34f"} Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.351194 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.368198 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.389161 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:56:16 crc kubenswrapper[4768]: E0217 13:56:16.389628 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af700d67-c9a3-4577-b872-6ffd620ce9b5" containerName="kube-state-metrics" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.389697 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="af700d67-c9a3-4577-b872-6ffd620ce9b5" containerName="kube-state-metrics" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.389960 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="af700d67-c9a3-4577-b872-6ffd620ce9b5" containerName="kube-state-metrics" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.390721 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.394729 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.395029 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.406922 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.499859 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.499903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.499942 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.500002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnqbr\" (UniqueName: \"kubernetes.io/projected/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-api-access-wnqbr\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.601535 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.601614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.601656 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.601718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnqbr\" (UniqueName: 
\"kubernetes.io/projected/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-api-access-wnqbr\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.607763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.611145 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.614943 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.626299 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnqbr\" (UniqueName: \"kubernetes.io/projected/dceedb47-5ab1-46d0-9e16-a8d267d73ff8-kube-api-access-wnqbr\") pod \"kube-state-metrics-0\" (UID: \"dceedb47-5ab1-46d0-9e16-a8d267d73ff8\") " pod="openstack/kube-state-metrics-0" Feb 17 13:56:16 crc kubenswrapper[4768]: I0217 13:56:16.710971 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 13:56:17 crc kubenswrapper[4768]: I0217 13:56:17.054221 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:17 crc kubenswrapper[4768]: I0217 13:56:17.143166 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 13:56:17 crc kubenswrapper[4768]: W0217 13:56:17.145753 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddceedb47_5ab1_46d0_9e16_a8d267d73ff8.slice/crio-99e8182f369cd4326c03ecdf418d84c074784237fc8946d5e82a623d29a01085 WatchSource:0}: Error finding container 99e8182f369cd4326c03ecdf418d84c074784237fc8946d5e82a623d29a01085: Status 404 returned error can't find the container with id 99e8182f369cd4326c03ecdf418d84c074784237fc8946d5e82a623d29a01085 Feb 17 13:56:17 crc kubenswrapper[4768]: I0217 13:56:17.310911 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerStarted","Data":"5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb"} Feb 17 13:56:17 crc kubenswrapper[4768]: I0217 13:56:17.311818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dceedb47-5ab1-46d0-9e16-a8d267d73ff8","Type":"ContainerStarted","Data":"99e8182f369cd4326c03ecdf418d84c074784237fc8946d5e82a623d29a01085"} Feb 17 13:56:17 crc kubenswrapper[4768]: I0217 13:56:17.686676 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af700d67-c9a3-4577-b872-6ffd620ce9b5" path="/var/lib/kubelet/pods/af700d67-c9a3-4577-b872-6ffd620ce9b5/volumes" Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.157464 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.157681 4768 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-log" containerID="cri-o://203ea60ac8fe35f253b6cd1d649e5f9ff820300b5239154f25461f6868be29a3" gracePeriod=30 Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.158164 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-httpd" containerID="cri-o://451132f07075a65508d8072b3c4ecbb82a0ca3b3da706b6771e4c4b4fc56d7a5" gracePeriod=30 Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.328811 4768 generic.go:334] "Generic (PLEG): container finished" podID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerID="203ea60ac8fe35f253b6cd1d649e5f9ff820300b5239154f25461f6868be29a3" exitCode=143 Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.328908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6e3afdd-2e51-4f0a-9208-5784a5900c96","Type":"ContainerDied","Data":"203ea60ac8fe35f253b6cd1d649e5f9ff820300b5239154f25461f6868be29a3"} Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.516951 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-85664fc4b9-7bclg" Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.576602 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b89745fbd-lcjtt"] Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.576866 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b89745fbd-lcjtt" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-api" containerID="cri-o://9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50" gracePeriod=30 Feb 17 13:56:18 crc kubenswrapper[4768]: I0217 13:56:18.576994 4768 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b89745fbd-lcjtt" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-httpd" containerID="cri-o://52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c" gracePeriod=30 Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.346617 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dceedb47-5ab1-46d0-9e16-a8d267d73ff8","Type":"ContainerStarted","Data":"1d8a7663e3cce766d683e1740b5be319053f378c798c49f586b8474af5254f99"} Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.348511 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.358997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerStarted","Data":"19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98"} Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.381138 4768 generic.go:334] "Generic (PLEG): container finished" podID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerID="296d6aba5a4d2bd2541c1c1b3437ee8e6f346ee60ab706f0f22283ada21455de" exitCode=0 Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.381236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bc3b9ad-1d13-4214-9824-af7003192ace","Type":"ContainerDied","Data":"296d6aba5a4d2bd2541c1c1b3437ee8e6f346ee60ab706f0f22283ada21455de"} Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.381579 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.485454191 podStartE2EDuration="3.381538033s" podCreationTimestamp="2026-02-17 13:56:16 +0000 UTC" firstStartedPulling="2026-02-17 13:56:17.148456873 +0000 UTC 
m=+1196.427843315" lastFinishedPulling="2026-02-17 13:56:18.044540715 +0000 UTC m=+1197.323927157" observedRunningTime="2026-02-17 13:56:19.366475775 +0000 UTC m=+1198.645862217" watchObservedRunningTime="2026-02-17 13:56:19.381538033 +0000 UTC m=+1198.660924475" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.397183 4768 generic.go:334] "Generic (PLEG): container finished" podID="c732e620-9ed0-4246-93ca-c71277029344" containerID="52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c" exitCode=0 Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.397229 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b89745fbd-lcjtt" event={"ID":"c732e620-9ed0-4246-93ca-c71277029344","Type":"ContainerDied","Data":"52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c"} Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.444049 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.481912 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7hz\" (UniqueName: \"kubernetes.io/projected/9bc3b9ad-1d13-4214-9824-af7003192ace-kube-api-access-5t7hz\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.482069 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-httpd-run\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.482162 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" 
(UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.482224 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-scripts\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.482286 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-config-data\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.482305 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-combined-ca-bundle\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.482347 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-logs\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.482409 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-public-tls-certs\") pod \"9bc3b9ad-1d13-4214-9824-af7003192ace\" (UID: \"9bc3b9ad-1d13-4214-9824-af7003192ace\") " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.483882 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.484283 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-logs" (OuterVolumeSpecName: "logs") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.491808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.515835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-scripts" (OuterVolumeSpecName: "scripts") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.515890 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc3b9ad-1d13-4214-9824-af7003192ace-kube-api-access-5t7hz" (OuterVolumeSpecName: "kube-api-access-5t7hz") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "kube-api-access-5t7hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.559028 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-config-data" (OuterVolumeSpecName: "config-data") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.564825 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.577470 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bc3b9ad-1d13-4214-9824-af7003192ace" (UID: "9bc3b9ad-1d13-4214-9824-af7003192ace"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586150 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586193 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586205 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586216 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586224 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc3b9ad-1d13-4214-9824-af7003192ace-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586232 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7hz\" (UniqueName: \"kubernetes.io/projected/9bc3b9ad-1d13-4214-9824-af7003192ace-kube-api-access-5t7hz\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586240 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bc3b9ad-1d13-4214-9824-af7003192ace-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.586267 4768 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.608063 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 17 13:56:19 crc kubenswrapper[4768]: I0217 13:56:19.688234 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.407176 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9bc3b9ad-1d13-4214-9824-af7003192ace","Type":"ContainerDied","Data":"9f43c8a9e94381f110295c6e677e57f773cd763ab23b211206a735b951148d44"} Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.407537 4768 scope.go:117] "RemoveContainer" containerID="296d6aba5a4d2bd2541c1c1b3437ee8e6f346ee60ab706f0f22283ada21455de" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.407227 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.413083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerStarted","Data":"8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3"} Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.444265 4768 scope.go:117] "RemoveContainer" containerID="2d1c215245457bf85f991f88f31a0dca885fcb114dcec4dae932d7a001c6c78d" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.450147 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.462667 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.494892 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:56:20 crc kubenswrapper[4768]: E0217 13:56:20.495325 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerName="glance-log" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.495345 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerName="glance-log" Feb 17 13:56:20 crc kubenswrapper[4768]: E0217 13:56:20.495356 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerName="glance-httpd" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.495362 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerName="glance-httpd" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.495552 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" 
containerName="glance-log" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.495567 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" containerName="glance-httpd" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.496448 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.505044 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.505199 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.513903 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605694 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605773 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605795 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605827 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zspq\" (UniqueName: \"kubernetes.io/projected/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-kube-api-access-2zspq\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605887 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605927 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.605949 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-logs\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707482 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707559 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707593 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-logs\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707642 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707701 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707735 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zspq\" (UniqueName: \"kubernetes.io/projected/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-kube-api-access-2zspq\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.707759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.708552 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.708640 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: 
\"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.708846 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-logs\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.711085 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.716843 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.722318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.726044 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " 
pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.727955 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zspq\" (UniqueName: \"kubernetes.io/projected/5d72c76c-a1d7-4256-ada6-3216f5d7c71a-kube-api-access-2zspq\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.752904 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"5d72c76c-a1d7-4256-ada6-3216f5d7c71a\") " pod="openstack/glance-default-external-api-0" Feb 17 13:56:20 crc kubenswrapper[4768]: I0217 13:56:20.855134 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.255637 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.423661 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-tls-certs\") pod \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.423726 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-scripts\") pod \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.423776 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxtml\" (UniqueName: \"kubernetes.io/projected/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-kube-api-access-bxtml\") pod \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.423807 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-combined-ca-bundle\") pod \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.423909 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-logs\") pod \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.423977 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-secret-key\") pod \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.424057 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-config-data\") pod \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\" (UID: \"c20ad4a2-cf3e-4390-9141-1cc58518fd2b\") " Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.426486 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-logs" (OuterVolumeSpecName: "logs") pod "c20ad4a2-cf3e-4390-9141-1cc58518fd2b" (UID: "c20ad4a2-cf3e-4390-9141-1cc58518fd2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.431056 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c20ad4a2-cf3e-4390-9141-1cc58518fd2b" (UID: "c20ad4a2-cf3e-4390-9141-1cc58518fd2b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.440855 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-kube-api-access-bxtml" (OuterVolumeSpecName: "kube-api-access-bxtml") pod "c20ad4a2-cf3e-4390-9141-1cc58518fd2b" (UID: "c20ad4a2-cf3e-4390-9141-1cc58518fd2b"). InnerVolumeSpecName "kube-api-access-bxtml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.446702 4768 generic.go:334] "Generic (PLEG): container finished" podID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerID="9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c" exitCode=1 Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.446771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerDied","Data":"9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c"} Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.446917 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-central-agent" containerID="cri-o://5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb" gracePeriod=30 Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.447321 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="sg-core" containerID="cri-o://8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3" gracePeriod=30 Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.447371 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-notification-agent" containerID="cri-o://19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98" gracePeriod=30 Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.449979 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-scripts" (OuterVolumeSpecName: "scripts") pod "c20ad4a2-cf3e-4390-9141-1cc58518fd2b" (UID: "c20ad4a2-cf3e-4390-9141-1cc58518fd2b"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.459391 4768 generic.go:334] "Generic (PLEG): container finished" podID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerID="451132f07075a65508d8072b3c4ecbb82a0ca3b3da706b6771e4c4b4fc56d7a5" exitCode=0 Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.459460 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6e3afdd-2e51-4f0a-9208-5784a5900c96","Type":"ContainerDied","Data":"451132f07075a65508d8072b3c4ecbb82a0ca3b3da706b6771e4c4b4fc56d7a5"} Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.461705 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-config-data" (OuterVolumeSpecName: "config-data") pod "c20ad4a2-cf3e-4390-9141-1cc58518fd2b" (UID: "c20ad4a2-cf3e-4390-9141-1cc58518fd2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.477533 4768 generic.go:334] "Generic (PLEG): container finished" podID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerID="90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08" exitCode=137 Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.478646 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-684746c5d4-6lxfv" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.479207 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684746c5d4-6lxfv" event={"ID":"c20ad4a2-cf3e-4390-9141-1cc58518fd2b","Type":"ContainerDied","Data":"90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08"} Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.479233 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684746c5d4-6lxfv" event={"ID":"c20ad4a2-cf3e-4390-9141-1cc58518fd2b","Type":"ContainerDied","Data":"1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5"} Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.479250 4768 scope.go:117] "RemoveContainer" containerID="4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.483846 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c20ad4a2-cf3e-4390-9141-1cc58518fd2b" (UID: "c20ad4a2-cf3e-4390-9141-1cc58518fd2b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.526430 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.526732 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxtml\" (UniqueName: \"kubernetes.io/projected/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-kube-api-access-bxtml\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.526750 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.526761 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.526772 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.526785 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.533078 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.535793 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c20ad4a2-cf3e-4390-9141-1cc58518fd2b" (UID: "c20ad4a2-cf3e-4390-9141-1cc58518fd2b"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.566667 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc3b9ad-1d13-4214-9824-af7003192ace" path="/var/lib/kubelet/pods/9bc3b9ad-1d13-4214-9824-af7003192ace/volumes" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.643165 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c20ad4a2-cf3e-4390-9141-1cc58518fd2b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.675918 4768 scope.go:117] "RemoveContainer" containerID="90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.705173 4768 scope.go:117] "RemoveContainer" containerID="4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1" Feb 17 13:56:21 crc kubenswrapper[4768]: E0217 13:56:21.706256 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1\": container with ID starting with 4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1 not found: ID does not exist" containerID="4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.706295 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1"} err="failed to get container status \"4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1\": rpc error: 
code = NotFound desc = could not find container \"4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1\": container with ID starting with 4725ccba78631322acecbc5f57357e815cbf090efacbc802aac47a86ce2bbbf1 not found: ID does not exist" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.706326 4768 scope.go:117] "RemoveContainer" containerID="90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08" Feb 17 13:56:21 crc kubenswrapper[4768]: E0217 13:56:21.710539 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08\": container with ID starting with 90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08 not found: ID does not exist" containerID="90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.710590 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08"} err="failed to get container status \"90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08\": rpc error: code = NotFound desc = could not find container \"90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08\": container with ID starting with 90e9f664f863f155e20726677206c7a01e93a7a33ecb475dfebd4929f9cabf08 not found: ID does not exist" Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.802990 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-684746c5d4-6lxfv"] Feb 17 13:56:21 crc kubenswrapper[4768]: I0217 13:56:21.818072 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-684746c5d4-6lxfv"] Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.452635 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.467251 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-config\") pod \"c732e620-9ed0-4246-93ca-c71277029344\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.467738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-httpd-config\") pod \"c732e620-9ed0-4246-93ca-c71277029344\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.476385 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c732e620-9ed0-4246-93ca-c71277029344" (UID: "c732e620-9ed0-4246-93ca-c71277029344"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.498430 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d72c76c-a1d7-4256-ada6-3216f5d7c71a","Type":"ContainerStarted","Data":"20d2cd3e66e88626d1e04cd66b22a3b3fbfc3d51bfdec5ee050dd5f92ad6395e"} Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.498480 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d72c76c-a1d7-4256-ada6-3216f5d7c71a","Type":"ContainerStarted","Data":"21dc9e71755375d8d6ea014af9bea5e0e895935d978a0bd0b71f792d3ea12418"} Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.502904 4768 generic.go:334] "Generic (PLEG): container finished" podID="c732e620-9ed0-4246-93ca-c71277029344" containerID="9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50" exitCode=0 Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.502980 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b89745fbd-lcjtt" event={"ID":"c732e620-9ed0-4246-93ca-c71277029344","Type":"ContainerDied","Data":"9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50"} Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.503011 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b89745fbd-lcjtt" event={"ID":"c732e620-9ed0-4246-93ca-c71277029344","Type":"ContainerDied","Data":"ace3362ed07f5e0539ea6dc60aee1c83920d3560c685450b37450b8a59c0cb07"} Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.503032 4768 scope.go:117] "RemoveContainer" containerID="52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.503220 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b89745fbd-lcjtt" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.512783 4768 generic.go:334] "Generic (PLEG): container finished" podID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerID="8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3" exitCode=2 Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.512809 4768 generic.go:334] "Generic (PLEG): container finished" podID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerID="19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98" exitCode=0 Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.513154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerDied","Data":"8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3"} Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.513376 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerDied","Data":"19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98"} Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.527381 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-config" (OuterVolumeSpecName: "config") pod "c732e620-9ed0-4246-93ca-c71277029344" (UID: "c732e620-9ed0-4246-93ca-c71277029344"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.552501 4768 scope.go:117] "RemoveContainer" containerID="9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.572452 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8ncg\" (UniqueName: \"kubernetes.io/projected/c732e620-9ed0-4246-93ca-c71277029344-kube-api-access-s8ncg\") pod \"c732e620-9ed0-4246-93ca-c71277029344\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.572511 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-combined-ca-bundle\") pod \"c732e620-9ed0-4246-93ca-c71277029344\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.572610 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-ovndb-tls-certs\") pod \"c732e620-9ed0-4246-93ca-c71277029344\" (UID: \"c732e620-9ed0-4246-93ca-c71277029344\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.575691 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.575765 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.577566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c732e620-9ed0-4246-93ca-c71277029344-kube-api-access-s8ncg" (OuterVolumeSpecName: "kube-api-access-s8ncg") pod "c732e620-9ed0-4246-93ca-c71277029344" (UID: "c732e620-9ed0-4246-93ca-c71277029344"). InnerVolumeSpecName "kube-api-access-s8ncg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.598841 4768 scope.go:117] "RemoveContainer" containerID="52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c" Feb 17 13:56:22 crc kubenswrapper[4768]: E0217 13:56:22.604170 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c\": container with ID starting with 52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c not found: ID does not exist" containerID="52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.604216 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c"} err="failed to get container status \"52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c\": rpc error: code = NotFound desc = could not find container \"52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c\": container with ID starting with 52b2a11a2eea4835beeebdcbe488ab1711193cf6232d4e75045d81b19a9a758c not found: ID does not exist" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.604245 4768 scope.go:117] "RemoveContainer" containerID="9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50" Feb 17 13:56:22 crc kubenswrapper[4768]: E0217 13:56:22.607309 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50\": 
container with ID starting with 9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50 not found: ID does not exist" containerID="9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.607354 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50"} err="failed to get container status \"9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50\": rpc error: code = NotFound desc = could not find container \"9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50\": container with ID starting with 9229cce94e15b6699b3ab990d66b23241868303d7e67cbe615cd0b79079aea50 not found: ID does not exist" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.657837 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c732e620-9ed0-4246-93ca-c71277029344" (UID: "c732e620-9ed0-4246-93ca-c71277029344"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.678707 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8ncg\" (UniqueName: \"kubernetes.io/projected/c732e620-9ed0-4246-93ca-c71277029344-kube-api-access-s8ncg\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.678742 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.687815 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c732e620-9ed0-4246-93ca-c71277029344" (UID: "c732e620-9ed0-4246-93ca-c71277029344"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.770902 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.780913 4768 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c732e620-9ed0-4246-93ca-c71277029344-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.854060 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b89745fbd-lcjtt"] Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.867090 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b89745fbd-lcjtt"] Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.884411 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.884660 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-internal-tls-certs\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.884782 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-scripts\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.884820 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-866qp\" (UniqueName: \"kubernetes.io/projected/a6e3afdd-2e51-4f0a-9208-5784a5900c96-kube-api-access-866qp\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: 
\"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.884882 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-httpd-run\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.884948 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-config-data\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.885023 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-logs\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.885055 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-combined-ca-bundle\") pod \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\" (UID: \"a6e3afdd-2e51-4f0a-9208-5784a5900c96\") " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.892488 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e3afdd-2e51-4f0a-9208-5784a5900c96-kube-api-access-866qp" (OuterVolumeSpecName: "kube-api-access-866qp") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "kube-api-access-866qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.892722 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.892782 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.892867 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-logs" (OuterVolumeSpecName: "logs") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.895791 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-scripts" (OuterVolumeSpecName: "scripts") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.926253 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.949942 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-config-data" (OuterVolumeSpecName: "config-data") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.963910 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6e3afdd-2e51-4f0a-9208-5784a5900c96" (UID: "a6e3afdd-2e51-4f0a-9208-5784a5900c96"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.986984 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.987030 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.987043 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.987087 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.987177 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.987190 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6e3afdd-2e51-4f0a-9208-5784a5900c96-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.987200 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-866qp\" (UniqueName: \"kubernetes.io/projected/a6e3afdd-2e51-4f0a-9208-5784a5900c96-kube-api-access-866qp\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:22 crc kubenswrapper[4768]: I0217 13:56:22.987213 4768 reconciler_common.go:293] "Volume 
detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6e3afdd-2e51-4f0a-9208-5784a5900c96-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.014999 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.089801 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.522894 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d72c76c-a1d7-4256-ada6-3216f5d7c71a","Type":"ContainerStarted","Data":"f3e96ce6f0c667c73dbc2b11d2c6552e91577a6ce67a0da158f9fc7ef4c3b4c9"} Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.527036 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6e3afdd-2e51-4f0a-9208-5784a5900c96","Type":"ContainerDied","Data":"4b2868befb5600dd322335a286ed6bbfe75a41e04c110c758f1a5a3601b13e50"} Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.527076 4768 scope.go:117] "RemoveContainer" containerID="451132f07075a65508d8072b3c4ecbb82a0ca3b3da706b6771e4c4b4fc56d7a5" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.527191 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.548477 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" path="/var/lib/kubelet/pods/c20ad4a2-cf3e-4390-9141-1cc58518fd2b/volumes" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.549642 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c732e620-9ed0-4246-93ca-c71277029344" path="/var/lib/kubelet/pods/c732e620-9ed0-4246-93ca-c71277029344/volumes" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.560779 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.5607650140000002 podStartE2EDuration="3.560765014s" podCreationTimestamp="2026-02-17 13:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:23.556830855 +0000 UTC m=+1202.836217297" watchObservedRunningTime="2026-02-17 13:56:23.560765014 +0000 UTC m=+1202.840151456" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.572039 4768 scope.go:117] "RemoveContainer" containerID="203ea60ac8fe35f253b6cd1d649e5f9ff820300b5239154f25461f6868be29a3" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.636641 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.654277 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682151 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:56:23 crc kubenswrapper[4768]: E0217 13:56:23.682546 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" 
containerName="horizon-log" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682560 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon-log" Feb 17 13:56:23 crc kubenswrapper[4768]: E0217 13:56:23.682576 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-httpd" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682582 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-httpd" Feb 17 13:56:23 crc kubenswrapper[4768]: E0217 13:56:23.682597 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-log" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682603 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-log" Feb 17 13:56:23 crc kubenswrapper[4768]: E0217 13:56:23.682615 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682622 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" Feb 17 13:56:23 crc kubenswrapper[4768]: E0217 13:56:23.682631 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-httpd" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682637 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-httpd" Feb 17 13:56:23 crc kubenswrapper[4768]: E0217 13:56:23.682649 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-api" Feb 17 13:56:23 crc 
kubenswrapper[4768]: I0217 13:56:23.682655 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-api" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682826 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-api" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682839 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c732e620-9ed0-4246-93ca-c71277029344" containerName="neutron-httpd" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682851 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-log" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682861 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" containerName="glance-httpd" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682870 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.682878 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20ad4a2-cf3e-4390-9141-1cc58518fd2b" containerName="horizon-log" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.683750 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.688607 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.688867 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.697962 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.814984 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.815056 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ee7793-1245-4648-aa12-ae11b1db13ca-logs\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.815091 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.815171 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/46ee7793-1245-4648-aa12-ae11b1db13ca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.815244 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.815282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.815319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhgt\" (UniqueName: \"kubernetes.io/projected/46ee7793-1245-4648-aa12-ae11b1db13ca-kube-api-access-9hhgt\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.815373 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917459 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46ee7793-1245-4648-aa12-ae11b1db13ca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917520 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-config-data\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917570 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhgt\" (UniqueName: \"kubernetes.io/projected/46ee7793-1245-4648-aa12-ae11b1db13ca-kube-api-access-9hhgt\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917603 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917622 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ee7793-1245-4648-aa12-ae11b1db13ca-logs\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.917691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.919003 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46ee7793-1245-4648-aa12-ae11b1db13ca-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.927221 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.928329 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.932366 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-scripts\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.935011 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ee7793-1245-4648-aa12-ae11b1db13ca-logs\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.935372 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhgt\" (UniqueName: \"kubernetes.io/projected/46ee7793-1245-4648-aa12-ae11b1db13ca-kube-api-access-9hhgt\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.937481 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.938831 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ee7793-1245-4648-aa12-ae11b1db13ca-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 13:56:23 crc kubenswrapper[4768]: I0217 13:56:23.967046 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"46ee7793-1245-4648-aa12-ae11b1db13ca\") " pod="openstack/glance-default-internal-api-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.005339 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.151574 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.230923 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w55c\" (UniqueName: \"kubernetes.io/projected/e41e9e01-66b3-4abc-b5f3-e3679c36be33-kube-api-access-7w55c\") pod \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.231004 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-run-httpd\") pod \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.231048 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-log-httpd\") pod \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.231112 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-scripts\") pod \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.231199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-combined-ca-bundle\") pod \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.231267 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-sg-core-conf-yaml\") pod \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.231287 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-config-data\") pod \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\" (UID: \"e41e9e01-66b3-4abc-b5f3-e3679c36be33\") " Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.232674 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e41e9e01-66b3-4abc-b5f3-e3679c36be33" (UID: "e41e9e01-66b3-4abc-b5f3-e3679c36be33"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.232842 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e41e9e01-66b3-4abc-b5f3-e3679c36be33" (UID: "e41e9e01-66b3-4abc-b5f3-e3679c36be33"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.238476 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-scripts" (OuterVolumeSpecName: "scripts") pod "e41e9e01-66b3-4abc-b5f3-e3679c36be33" (UID: "e41e9e01-66b3-4abc-b5f3-e3679c36be33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.251835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41e9e01-66b3-4abc-b5f3-e3679c36be33-kube-api-access-7w55c" (OuterVolumeSpecName: "kube-api-access-7w55c") pod "e41e9e01-66b3-4abc-b5f3-e3679c36be33" (UID: "e41e9e01-66b3-4abc-b5f3-e3679c36be33"). InnerVolumeSpecName "kube-api-access-7w55c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.262362 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e41e9e01-66b3-4abc-b5f3-e3679c36be33" (UID: "e41e9e01-66b3-4abc-b5f3-e3679c36be33"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.323562 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e41e9e01-66b3-4abc-b5f3-e3679c36be33" (UID: "e41e9e01-66b3-4abc-b5f3-e3679c36be33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.333914 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.333951 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.333963 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w55c\" (UniqueName: \"kubernetes.io/projected/e41e9e01-66b3-4abc-b5f3-e3679c36be33-kube-api-access-7w55c\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.333973 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.333981 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e41e9e01-66b3-4abc-b5f3-e3679c36be33-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.334000 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.336465 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-config-data" (OuterVolumeSpecName: "config-data") pod "e41e9e01-66b3-4abc-b5f3-e3679c36be33" (UID: "e41e9e01-66b3-4abc-b5f3-e3679c36be33"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.436246 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41e9e01-66b3-4abc-b5f3-e3679c36be33-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.539963 4768 generic.go:334] "Generic (PLEG): container finished" podID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerID="5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb" exitCode=0 Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.540115 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerDied","Data":"5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb"} Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.540410 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e41e9e01-66b3-4abc-b5f3-e3679c36be33","Type":"ContainerDied","Data":"f42fd80fe21648b512a25013fef19367337ed2159aa61e4ac086d21d3559b34f"} Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.540206 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.540450 4768 scope.go:117] "RemoveContainer" containerID="9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.561468 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.566593 4768 scope.go:117] "RemoveContainer" containerID="8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.583421 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.591851 4768 scope.go:117] "RemoveContainer" containerID="19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.593459 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.623654 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:24 crc kubenswrapper[4768]: E0217 13:56:24.624364 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="proxy-httpd" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624389 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="proxy-httpd" Feb 17 13:56:24 crc kubenswrapper[4768]: E0217 13:56:24.624421 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="sg-core" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624427 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="sg-core" Feb 17 13:56:24 crc 
kubenswrapper[4768]: E0217 13:56:24.624453 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-notification-agent" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624459 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-notification-agent" Feb 17 13:56:24 crc kubenswrapper[4768]: E0217 13:56:24.624481 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-central-agent" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624487 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-central-agent" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624814 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-central-agent" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624848 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="proxy-httpd" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624867 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="ceilometer-notification-agent" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.624878 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" containerName="sg-core" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.628938 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.631609 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.631807 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.635256 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.638667 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.651685 4768 scope.go:117] "RemoveContainer" containerID="5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.709362 4768 scope.go:117] "RemoveContainer" containerID="9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c" Feb 17 13:56:24 crc kubenswrapper[4768]: E0217 13:56:24.710061 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c\": container with ID starting with 9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c not found: ID does not exist" containerID="9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.710161 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c"} err="failed to get container status \"9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c\": rpc error: code = NotFound desc = could not find container \"9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c\": 
container with ID starting with 9584bda6fe96f289ee3a52380eb9c5d37c9edc61eb98e4c6e2cc650e326f5a1c not found: ID does not exist" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.710208 4768 scope.go:117] "RemoveContainer" containerID="8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3" Feb 17 13:56:24 crc kubenswrapper[4768]: E0217 13:56:24.711309 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3\": container with ID starting with 8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3 not found: ID does not exist" containerID="8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.711342 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3"} err="failed to get container status \"8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3\": rpc error: code = NotFound desc = could not find container \"8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3\": container with ID starting with 8f327c5a1381c4409f94baff651bd100f059c0771c4c52d93852ea73b65ef0a3 not found: ID does not exist" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.711365 4768 scope.go:117] "RemoveContainer" containerID="19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98" Feb 17 13:56:24 crc kubenswrapper[4768]: E0217 13:56:24.712063 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98\": container with ID starting with 19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98 not found: ID does not exist" 
containerID="19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.712094 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98"} err="failed to get container status \"19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98\": rpc error: code = NotFound desc = could not find container \"19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98\": container with ID starting with 19b892b320c19a000f0d4f971fe8d740085de7e9329d7502691155f570151d98 not found: ID does not exist" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.712127 4768 scope.go:117] "RemoveContainer" containerID="5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb" Feb 17 13:56:24 crc kubenswrapper[4768]: E0217 13:56:24.712452 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb\": container with ID starting with 5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb not found: ID does not exist" containerID="5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.712479 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb"} err="failed to get container status \"5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb\": rpc error: code = NotFound desc = could not find container \"5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb\": container with ID starting with 5fa7c315ecddcca2b1de68ee42a6f06db97d07f3d00e61737d0adf10f5b0dfdb not found: ID does not exist" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742395 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-config-data\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742508 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742529 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-run-httpd\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742556 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-log-httpd\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742601 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-scripts\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2lz\" (UniqueName: 
\"kubernetes.io/projected/b0805602-c3b7-4644-a94a-3d1c7d55844e-kube-api-access-8x2lz\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742654 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.742689 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844356 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-scripts\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844440 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2lz\" (UniqueName: \"kubernetes.io/projected/b0805602-c3b7-4644-a94a-3d1c7d55844e-kube-api-access-8x2lz\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844473 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " 
pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844510 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-config-data\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844593 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844608 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-run-httpd\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.844654 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-log-httpd\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.848564 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-log-httpd\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.849328 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-run-httpd\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.849460 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.850859 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-scripts\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.851269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-config-data\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.852733 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.854521 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.863754 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2lz\" (UniqueName: \"kubernetes.io/projected/b0805602-c3b7-4644-a94a-3d1c7d55844e-kube-api-access-8x2lz\") pod \"ceilometer-0\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " pod="openstack/ceilometer-0" Feb 17 13:56:24 crc kubenswrapper[4768]: I0217 13:56:24.956735 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:25 crc kubenswrapper[4768]: I0217 13:56:25.453899 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:25 crc kubenswrapper[4768]: I0217 13:56:25.546670 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6e3afdd-2e51-4f0a-9208-5784a5900c96" path="/var/lib/kubelet/pods/a6e3afdd-2e51-4f0a-9208-5784a5900c96/volumes" Feb 17 13:56:25 crc kubenswrapper[4768]: I0217 13:56:25.548922 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41e9e01-66b3-4abc-b5f3-e3679c36be33" path="/var/lib/kubelet/pods/e41e9e01-66b3-4abc-b5f3-e3679c36be33/volumes" Feb 17 13:56:25 crc kubenswrapper[4768]: I0217 13:56:25.567311 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46ee7793-1245-4648-aa12-ae11b1db13ca","Type":"ContainerStarted","Data":"411b39a374f46c1a0cb51a0f3471a6eb3c4ed265306121bb2110a1603ceda9d5"} Feb 17 13:56:25 crc kubenswrapper[4768]: I0217 13:56:25.567357 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"46ee7793-1245-4648-aa12-ae11b1db13ca","Type":"ContainerStarted","Data":"211b09298705c2a6a3c00deebca68ee280fa97a0bf46544022452518cd200cba"} Feb 17 13:56:25 crc kubenswrapper[4768]: I0217 13:56:25.569116 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerStarted","Data":"6bf645155a08b4cdf6c6f86b23148fbebc8d1c20bd44f70b7c301ba948f4e2d2"} Feb 17 13:56:25 crc kubenswrapper[4768]: I0217 13:56:25.617257 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:26 crc kubenswrapper[4768]: I0217 13:56:26.577502 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"46ee7793-1245-4648-aa12-ae11b1db13ca","Type":"ContainerStarted","Data":"8dcd6f128524d1b4b0721daaca44fba3f7510f24e7ff55484715377b76ea454e"} Feb 17 13:56:26 crc kubenswrapper[4768]: I0217 13:56:26.580579 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerStarted","Data":"ab2e5eabb27b33b6e2f649f9bd60b7a4a585d450e7491d28b7e5eb56865946c2"} Feb 17 13:56:26 crc kubenswrapper[4768]: I0217 13:56:26.608862 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.60882953 podStartE2EDuration="3.60882953s" podCreationTimestamp="2026-02-17 13:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:26.599967444 +0000 UTC m=+1205.879353886" watchObservedRunningTime="2026-02-17 13:56:26.60882953 +0000 UTC m=+1205.888215972" Feb 17 13:56:26 crc kubenswrapper[4768]: I0217 13:56:26.728028 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 13:56:27 crc kubenswrapper[4768]: 
I0217 13:56:27.591851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerStarted","Data":"37636f611a4026bad57698d28844676493572866cabb321d162de802764be4e7"} Feb 17 13:56:27 crc kubenswrapper[4768]: I0217 13:56:27.592205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerStarted","Data":"4b5af501f9c4b3c1bdd8415879a0e5f8587909dc3bc62517f61ee3e942ee38d4"} Feb 17 13:56:29 crc kubenswrapper[4768]: E0217 13:56:29.604271 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice/crio-1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice\": RecentStats: unable to find data in memory cache]" Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.617310 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerStarted","Data":"33c01961b593d1cefa883cafbb64a9e2dde45c5497eb62006f755e150dd59b38"} Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.617664 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="sg-core" containerID="cri-o://37636f611a4026bad57698d28844676493572866cabb321d162de802764be4e7" gracePeriod=30 Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.617588 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" 
containerName="proxy-httpd" containerID="cri-o://33c01961b593d1cefa883cafbb64a9e2dde45c5497eb62006f755e150dd59b38" gracePeriod=30 Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.617612 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="ceilometer-notification-agent" containerID="cri-o://4b5af501f9c4b3c1bdd8415879a0e5f8587909dc3bc62517f61ee3e942ee38d4" gracePeriod=30 Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.617508 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="ceilometer-central-agent" containerID="cri-o://ab2e5eabb27b33b6e2f649f9bd60b7a4a585d450e7491d28b7e5eb56865946c2" gracePeriod=30 Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.620159 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.662457 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.73517621 podStartE2EDuration="6.662432101s" podCreationTimestamp="2026-02-17 13:56:24 +0000 UTC" firstStartedPulling="2026-02-17 13:56:25.461667862 +0000 UTC m=+1204.741054304" lastFinishedPulling="2026-02-17 13:56:29.388923753 +0000 UTC m=+1208.668310195" observedRunningTime="2026-02-17 13:56:30.646050027 +0000 UTC m=+1209.925436499" watchObservedRunningTime="2026-02-17 13:56:30.662432101 +0000 UTC m=+1209.941818543" Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.855632 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.855685 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 
13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.902921 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 13:56:30 crc kubenswrapper[4768]: I0217 13:56:30.908818 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.598583 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9c5h6"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.599827 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.615808 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9c5h6"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.640158 4768 generic.go:334] "Generic (PLEG): container finished" podID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerID="33c01961b593d1cefa883cafbb64a9e2dde45c5497eb62006f755e150dd59b38" exitCode=0 Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.640214 4768 generic.go:334] "Generic (PLEG): container finished" podID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerID="37636f611a4026bad57698d28844676493572866cabb321d162de802764be4e7" exitCode=2 Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.640222 4768 generic.go:334] "Generic (PLEG): container finished" podID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerID="4b5af501f9c4b3c1bdd8415879a0e5f8587909dc3bc62517f61ee3e942ee38d4" exitCode=0 Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.640300 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerDied","Data":"33c01961b593d1cefa883cafbb64a9e2dde45c5497eb62006f755e150dd59b38"} Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.640327 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerDied","Data":"37636f611a4026bad57698d28844676493572866cabb321d162de802764be4e7"} Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.640337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerDied","Data":"4b5af501f9c4b3c1bdd8415879a0e5f8587909dc3bc62517f61ee3e942ee38d4"} Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.642873 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.642956 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.683862 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpnjf\" (UniqueName: \"kubernetes.io/projected/8dc679a7-9d70-46d9-a89b-69e761fcf366-kube-api-access-bpnjf\") pod \"nova-api-db-create-9c5h6\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.683907 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc679a7-9d70-46d9-a89b-69e761fcf366-operator-scripts\") pod \"nova-api-db-create-9c5h6\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.698267 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wbqmg"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.699481 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.706849 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wbqmg"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.785626 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f159e76f-1606-4a1d-8ce3-647851c11669-operator-scripts\") pod \"nova-cell0-db-create-wbqmg\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.785802 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6gm6\" (UniqueName: \"kubernetes.io/projected/f159e76f-1606-4a1d-8ce3-647851c11669-kube-api-access-q6gm6\") pod \"nova-cell0-db-create-wbqmg\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.791176 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnjf\" (UniqueName: \"kubernetes.io/projected/8dc679a7-9d70-46d9-a89b-69e761fcf366-kube-api-access-bpnjf\") pod \"nova-api-db-create-9c5h6\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.791225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc679a7-9d70-46d9-a89b-69e761fcf366-operator-scripts\") pod \"nova-api-db-create-9c5h6\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.792093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/8dc679a7-9d70-46d9-a89b-69e761fcf366-operator-scripts\") pod \"nova-api-db-create-9c5h6\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.803745 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-ggdjk"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.805129 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.818402 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-930a-account-create-update-vzjbq"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.819579 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.822333 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.830026 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-930a-account-create-update-vzjbq"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.838791 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ggdjk"] Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.859875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnjf\" (UniqueName: \"kubernetes.io/projected/8dc679a7-9d70-46d9-a89b-69e761fcf366-kube-api-access-bpnjf\") pod \"nova-api-db-create-9c5h6\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.893246 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zqgvp\" (UniqueName: \"kubernetes.io/projected/b9421cc5-76da-4822-984c-7ac27c814dfe-kube-api-access-zqgvp\") pod \"nova-api-930a-account-create-update-vzjbq\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.893591 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f159e76f-1606-4a1d-8ce3-647851c11669-operator-scripts\") pod \"nova-cell0-db-create-wbqmg\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.893638 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11dd3938-0363-4020-b8c3-4a1510d0d400-operator-scripts\") pod \"nova-cell1-db-create-ggdjk\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.893676 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9421cc5-76da-4822-984c-7ac27c814dfe-operator-scripts\") pod \"nova-api-930a-account-create-update-vzjbq\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.893711 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz76b\" (UniqueName: \"kubernetes.io/projected/11dd3938-0363-4020-b8c3-4a1510d0d400-kube-api-access-jz76b\") pod \"nova-cell1-db-create-ggdjk\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 
13:56:31.893737 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6gm6\" (UniqueName: \"kubernetes.io/projected/f159e76f-1606-4a1d-8ce3-647851c11669-kube-api-access-q6gm6\") pod \"nova-cell0-db-create-wbqmg\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.895690 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f159e76f-1606-4a1d-8ce3-647851c11669-operator-scripts\") pod \"nova-cell0-db-create-wbqmg\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.915046 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6gm6\" (UniqueName: \"kubernetes.io/projected/f159e76f-1606-4a1d-8ce3-647851c11669-kube-api-access-q6gm6\") pod \"nova-cell0-db-create-wbqmg\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:31 crc kubenswrapper[4768]: I0217 13:56:31.930493 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.005084 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f310-account-create-update-pssrg"] Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.006468 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqgvp\" (UniqueName: \"kubernetes.io/projected/b9421cc5-76da-4822-984c-7ac27c814dfe-kube-api-access-zqgvp\") pod \"nova-api-930a-account-create-update-vzjbq\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.008981 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11dd3938-0363-4020-b8c3-4a1510d0d400-operator-scripts\") pod \"nova-cell1-db-create-ggdjk\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.009051 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9421cc5-76da-4822-984c-7ac27c814dfe-operator-scripts\") pod \"nova-api-930a-account-create-update-vzjbq\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.009122 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz76b\" (UniqueName: \"kubernetes.io/projected/11dd3938-0363-4020-b8c3-4a1510d0d400-kube-api-access-jz76b\") pod \"nova-cell1-db-create-ggdjk\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.010322 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11dd3938-0363-4020-b8c3-4a1510d0d400-operator-scripts\") pod \"nova-cell1-db-create-ggdjk\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.010948 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9421cc5-76da-4822-984c-7ac27c814dfe-operator-scripts\") pod \"nova-api-930a-account-create-update-vzjbq\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.011327 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.015271 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.020960 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.025008 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqgvp\" (UniqueName: \"kubernetes.io/projected/b9421cc5-76da-4822-984c-7ac27c814dfe-kube-api-access-zqgvp\") pod \"nova-api-930a-account-create-update-vzjbq\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.037788 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz76b\" (UniqueName: \"kubernetes.io/projected/11dd3938-0363-4020-b8c3-4a1510d0d400-kube-api-access-jz76b\") pod \"nova-cell1-db-create-ggdjk\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.037883 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f310-account-create-update-pssrg"] Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.110494 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4j7p\" (UniqueName: \"kubernetes.io/projected/d4fed022-7a29-4dd3-8660-be750880438c-kube-api-access-c4j7p\") pod \"nova-cell0-f310-account-create-update-pssrg\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.110841 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fed022-7a29-4dd3-8660-be750880438c-operator-scripts\") pod \"nova-cell0-f310-account-create-update-pssrg\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 
13:56:32.126961 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.141551 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.212365 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4j7p\" (UniqueName: \"kubernetes.io/projected/d4fed022-7a29-4dd3-8660-be750880438c-kube-api-access-c4j7p\") pod \"nova-cell0-f310-account-create-update-pssrg\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.212435 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fed022-7a29-4dd3-8660-be750880438c-operator-scripts\") pod \"nova-cell0-f310-account-create-update-pssrg\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.212856 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ef60-account-create-update-262kh"] Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.214278 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.215587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fed022-7a29-4dd3-8660-be750880438c-operator-scripts\") pod \"nova-cell0-f310-account-create-update-pssrg\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.222703 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.234169 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ef60-account-create-update-262kh"] Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.237929 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4j7p\" (UniqueName: \"kubernetes.io/projected/d4fed022-7a29-4dd3-8660-be750880438c-kube-api-access-c4j7p\") pod \"nova-cell0-f310-account-create-update-pssrg\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.314464 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/127b440a-bcde-4b51-ae43-221b093dcdb7-operator-scripts\") pod \"nova-cell1-ef60-account-create-update-262kh\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.314516 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7x4t\" (UniqueName: 
\"kubernetes.io/projected/127b440a-bcde-4b51-ae43-221b093dcdb7-kube-api-access-c7x4t\") pod \"nova-cell1-ef60-account-create-update-262kh\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.409488 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.415891 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/127b440a-bcde-4b51-ae43-221b093dcdb7-operator-scripts\") pod \"nova-cell1-ef60-account-create-update-262kh\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.415938 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7x4t\" (UniqueName: \"kubernetes.io/projected/127b440a-bcde-4b51-ae43-221b093dcdb7-kube-api-access-c7x4t\") pod \"nova-cell1-ef60-account-create-update-262kh\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.417268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/127b440a-bcde-4b51-ae43-221b093dcdb7-operator-scripts\") pod \"nova-cell1-ef60-account-create-update-262kh\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.438775 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7x4t\" (UniqueName: \"kubernetes.io/projected/127b440a-bcde-4b51-ae43-221b093dcdb7-kube-api-access-c7x4t\") pod 
\"nova-cell1-ef60-account-create-update-262kh\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.495890 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9c5h6"] Feb 17 13:56:32 crc kubenswrapper[4768]: W0217 13:56:32.510506 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dc679a7_9d70_46d9_a89b_69e761fcf366.slice/crio-e4a3b68f42af935238418adf03dc486df6b6f9891ac83126d75f47cc2ee3de4d WatchSource:0}: Error finding container e4a3b68f42af935238418adf03dc486df6b6f9891ac83126d75f47cc2ee3de4d: Status 404 returned error can't find the container with id e4a3b68f42af935238418adf03dc486df6b6f9891ac83126d75f47cc2ee3de4d Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.548709 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.600086 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wbqmg"] Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.700721 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ggdjk"] Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.707466 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wbqmg" event={"ID":"f159e76f-1606-4a1d-8ce3-647851c11669","Type":"ContainerStarted","Data":"d1e565379c16f17c98bdf24c918bac2fb2a172f20b2fb9f7d07bdcc261355c3b"} Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.716461 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9c5h6" 
event={"ID":"8dc679a7-9d70-46d9-a89b-69e761fcf366","Type":"ContainerStarted","Data":"e4a3b68f42af935238418adf03dc486df6b6f9891ac83126d75f47cc2ee3de4d"} Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.800800 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-930a-account-create-update-vzjbq"] Feb 17 13:56:32 crc kubenswrapper[4768]: I0217 13:56:32.976852 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f310-account-create-update-pssrg"] Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.125549 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ef60-account-create-update-262kh"] Feb 17 13:56:33 crc kubenswrapper[4768]: W0217 13:56:33.157321 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127b440a_bcde_4b51_ae43_221b093dcdb7.slice/crio-1332fa8cdfae800fe665430eaa94b6443605730b0f8eb7207feb2997ad77dedc WatchSource:0}: Error finding container 1332fa8cdfae800fe665430eaa94b6443605730b0f8eb7207feb2997ad77dedc: Status 404 returned error can't find the container with id 1332fa8cdfae800fe665430eaa94b6443605730b0f8eb7207feb2997ad77dedc Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.726918 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-930a-account-create-update-vzjbq" event={"ID":"b9421cc5-76da-4822-984c-7ac27c814dfe","Type":"ContainerStarted","Data":"bbf8f18df6c232b107faf4ca4b5b269de1cf55797370bd038d67d754d01b5dc3"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.727288 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-930a-account-create-update-vzjbq" event={"ID":"b9421cc5-76da-4822-984c-7ac27c814dfe","Type":"ContainerStarted","Data":"4f3ab6e85cf66ea8baf90cc7d625c2c4de99cd3a291f8685fd475d177d2e5318"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.728561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell0-f310-account-create-update-pssrg" event={"ID":"d4fed022-7a29-4dd3-8660-be750880438c","Type":"ContainerStarted","Data":"6f1ffb80a0ea190bf7de58c9964e9e6d33de99fa982223dce6cb8f70bf07c3a0"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.728632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f310-account-create-update-pssrg" event={"ID":"d4fed022-7a29-4dd3-8660-be750880438c","Type":"ContainerStarted","Data":"2fd19981072b1e3eba0e2bb5ff57e2f8e79a9c4f87db15ff76e5640d90e3bc38"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.730338 4768 generic.go:334] "Generic (PLEG): container finished" podID="f159e76f-1606-4a1d-8ce3-647851c11669" containerID="ed2461ef02a61bee063e89b94b574e2a569467f23a6acfaec8cb90a7beed37ed" exitCode=0 Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.730395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wbqmg" event={"ID":"f159e76f-1606-4a1d-8ce3-647851c11669","Type":"ContainerDied","Data":"ed2461ef02a61bee063e89b94b574e2a569467f23a6acfaec8cb90a7beed37ed"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.732317 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef60-account-create-update-262kh" event={"ID":"127b440a-bcde-4b51-ae43-221b093dcdb7","Type":"ContainerStarted","Data":"0d2430c2ff9b73a4bba8bdc12e39f50afd5d02c15a1929ec3be4ef7304e75580"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.732359 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef60-account-create-update-262kh" event={"ID":"127b440a-bcde-4b51-ae43-221b093dcdb7","Type":"ContainerStarted","Data":"1332fa8cdfae800fe665430eaa94b6443605730b0f8eb7207feb2997ad77dedc"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.734747 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ggdjk" 
event={"ID":"11dd3938-0363-4020-b8c3-4a1510d0d400","Type":"ContainerStarted","Data":"7f6431dc27c7a6bdb46e8e5182986983ab2f9b0e28a950cee33fb278b3006033"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.734790 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ggdjk" event={"ID":"11dd3938-0363-4020-b8c3-4a1510d0d400","Type":"ContainerStarted","Data":"404a5b3903e8859f8601cb862fd9925ae7686baf3a08f379e9846d36f5bcf1cd"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.738175 4768 generic.go:334] "Generic (PLEG): container finished" podID="8dc679a7-9d70-46d9-a89b-69e761fcf366" containerID="851b6d25464a9f9ecf281705fffd30f7e68254b23646d78f4935dc0709f2790d" exitCode=0 Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.738275 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.738289 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.739228 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9c5h6" event={"ID":"8dc679a7-9d70-46d9-a89b-69e761fcf366","Type":"ContainerDied","Data":"851b6d25464a9f9ecf281705fffd30f7e68254b23646d78f4935dc0709f2790d"} Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.753721 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-930a-account-create-update-vzjbq" podStartSLOduration=2.753698133 podStartE2EDuration="2.753698133s" podCreationTimestamp="2026-02-17 13:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:33.745524827 +0000 UTC m=+1213.024911279" watchObservedRunningTime="2026-02-17 13:56:33.753698133 +0000 UTC m=+1213.033084585" Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.823053 4768 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-cell1-db-create-ggdjk" podStartSLOduration=2.823031174 podStartE2EDuration="2.823031174s" podCreationTimestamp="2026-02-17 13:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:33.806013333 +0000 UTC m=+1213.085399775" watchObservedRunningTime="2026-02-17 13:56:33.823031174 +0000 UTC m=+1213.102417616" Feb 17 13:56:33 crc kubenswrapper[4768]: I0217 13:56:33.826828 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f310-account-create-update-pssrg" podStartSLOduration=2.826798948 podStartE2EDuration="2.826798948s" podCreationTimestamp="2026-02-17 13:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:33.820197946 +0000 UTC m=+1213.099584388" watchObservedRunningTime="2026-02-17 13:56:33.826798948 +0000 UTC m=+1213.106185390" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.006476 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.006526 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.038603 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.054423 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.072606 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ef60-account-create-update-262kh" 
podStartSLOduration=2.072585917 podStartE2EDuration="2.072585917s" podCreationTimestamp="2026-02-17 13:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:56:33.841287539 +0000 UTC m=+1213.120673981" watchObservedRunningTime="2026-02-17 13:56:34.072585917 +0000 UTC m=+1213.351972359" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.269376 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.425873 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.763856 4768 generic.go:334] "Generic (PLEG): container finished" podID="d4fed022-7a29-4dd3-8660-be750880438c" containerID="6f1ffb80a0ea190bf7de58c9964e9e6d33de99fa982223dce6cb8f70bf07c3a0" exitCode=0 Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.763927 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f310-account-create-update-pssrg" event={"ID":"d4fed022-7a29-4dd3-8660-be750880438c","Type":"ContainerDied","Data":"6f1ffb80a0ea190bf7de58c9964e9e6d33de99fa982223dce6cb8f70bf07c3a0"} Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.767945 4768 generic.go:334] "Generic (PLEG): container finished" podID="11dd3938-0363-4020-b8c3-4a1510d0d400" containerID="7f6431dc27c7a6bdb46e8e5182986983ab2f9b0e28a950cee33fb278b3006033" exitCode=0 Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.768057 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ggdjk" event={"ID":"11dd3938-0363-4020-b8c3-4a1510d0d400","Type":"ContainerDied","Data":"7f6431dc27c7a6bdb46e8e5182986983ab2f9b0e28a950cee33fb278b3006033"} Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.770156 4768 generic.go:334] "Generic 
(PLEG): container finished" podID="127b440a-bcde-4b51-ae43-221b093dcdb7" containerID="0d2430c2ff9b73a4bba8bdc12e39f50afd5d02c15a1929ec3be4ef7304e75580" exitCode=0 Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.770229 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef60-account-create-update-262kh" event={"ID":"127b440a-bcde-4b51-ae43-221b093dcdb7","Type":"ContainerDied","Data":"0d2430c2ff9b73a4bba8bdc12e39f50afd5d02c15a1929ec3be4ef7304e75580"} Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.772278 4768 generic.go:334] "Generic (PLEG): container finished" podID="b9421cc5-76da-4822-984c-7ac27c814dfe" containerID="bbf8f18df6c232b107faf4ca4b5b269de1cf55797370bd038d67d754d01b5dc3" exitCode=0 Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.772342 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-930a-account-create-update-vzjbq" event={"ID":"b9421cc5-76da-4822-984c-7ac27c814dfe","Type":"ContainerDied","Data":"bbf8f18df6c232b107faf4ca4b5b269de1cf55797370bd038d67d754d01b5dc3"} Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.772845 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:34 crc kubenswrapper[4768]: I0217 13:56:34.772902 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.206722 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.303513 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f159e76f-1606-4a1d-8ce3-647851c11669-operator-scripts\") pod \"f159e76f-1606-4a1d-8ce3-647851c11669\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.303628 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6gm6\" (UniqueName: \"kubernetes.io/projected/f159e76f-1606-4a1d-8ce3-647851c11669-kube-api-access-q6gm6\") pod \"f159e76f-1606-4a1d-8ce3-647851c11669\" (UID: \"f159e76f-1606-4a1d-8ce3-647851c11669\") " Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.305771 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f159e76f-1606-4a1d-8ce3-647851c11669-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f159e76f-1606-4a1d-8ce3-647851c11669" (UID: "f159e76f-1606-4a1d-8ce3-647851c11669"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.313528 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f159e76f-1606-4a1d-8ce3-647851c11669-kube-api-access-q6gm6" (OuterVolumeSpecName: "kube-api-access-q6gm6") pod "f159e76f-1606-4a1d-8ce3-647851c11669" (UID: "f159e76f-1606-4a1d-8ce3-647851c11669"). InnerVolumeSpecName "kube-api-access-q6gm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.358132 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.405186 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc679a7-9d70-46d9-a89b-69e761fcf366-operator-scripts\") pod \"8dc679a7-9d70-46d9-a89b-69e761fcf366\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.405364 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpnjf\" (UniqueName: \"kubernetes.io/projected/8dc679a7-9d70-46d9-a89b-69e761fcf366-kube-api-access-bpnjf\") pod \"8dc679a7-9d70-46d9-a89b-69e761fcf366\" (UID: \"8dc679a7-9d70-46d9-a89b-69e761fcf366\") " Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.405849 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f159e76f-1606-4a1d-8ce3-647851c11669-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.405878 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6gm6\" (UniqueName: \"kubernetes.io/projected/f159e76f-1606-4a1d-8ce3-647851c11669-kube-api-access-q6gm6\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.406245 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc679a7-9d70-46d9-a89b-69e761fcf366-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8dc679a7-9d70-46d9-a89b-69e761fcf366" (UID: "8dc679a7-9d70-46d9-a89b-69e761fcf366"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.409781 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc679a7-9d70-46d9-a89b-69e761fcf366-kube-api-access-bpnjf" (OuterVolumeSpecName: "kube-api-access-bpnjf") pod "8dc679a7-9d70-46d9-a89b-69e761fcf366" (UID: "8dc679a7-9d70-46d9-a89b-69e761fcf366"). InnerVolumeSpecName "kube-api-access-bpnjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.508175 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpnjf\" (UniqueName: \"kubernetes.io/projected/8dc679a7-9d70-46d9-a89b-69e761fcf366-kube-api-access-bpnjf\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.508225 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc679a7-9d70-46d9-a89b-69e761fcf366-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.784082 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wbqmg" event={"ID":"f159e76f-1606-4a1d-8ce3-647851c11669","Type":"ContainerDied","Data":"d1e565379c16f17c98bdf24c918bac2fb2a172f20b2fb9f7d07bdcc261355c3b"} Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.784414 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e565379c16f17c98bdf24c918bac2fb2a172f20b2fb9f7d07bdcc261355c3b" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.784769 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wbqmg" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.785900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9c5h6" event={"ID":"8dc679a7-9d70-46d9-a89b-69e761fcf366","Type":"ContainerDied","Data":"e4a3b68f42af935238418adf03dc486df6b6f9891ac83126d75f47cc2ee3de4d"} Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.786010 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4a3b68f42af935238418adf03dc486df6b6f9891ac83126d75f47cc2ee3de4d" Feb 17 13:56:35 crc kubenswrapper[4768]: I0217 13:56:35.785915 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9c5h6" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.235511 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.327141 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7x4t\" (UniqueName: \"kubernetes.io/projected/127b440a-bcde-4b51-ae43-221b093dcdb7-kube-api-access-c7x4t\") pod \"127b440a-bcde-4b51-ae43-221b093dcdb7\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.327826 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/127b440a-bcde-4b51-ae43-221b093dcdb7-operator-scripts\") pod \"127b440a-bcde-4b51-ae43-221b093dcdb7\" (UID: \"127b440a-bcde-4b51-ae43-221b093dcdb7\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.328416 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/127b440a-bcde-4b51-ae43-221b093dcdb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"127b440a-bcde-4b51-ae43-221b093dcdb7" (UID: "127b440a-bcde-4b51-ae43-221b093dcdb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.328888 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/127b440a-bcde-4b51-ae43-221b093dcdb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.332411 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127b440a-bcde-4b51-ae43-221b093dcdb7-kube-api-access-c7x4t" (OuterVolumeSpecName: "kube-api-access-c7x4t") pod "127b440a-bcde-4b51-ae43-221b093dcdb7" (UID: "127b440a-bcde-4b51-ae43-221b093dcdb7"). InnerVolumeSpecName "kube-api-access-c7x4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.380953 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.387062 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.396173 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.435296 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz76b\" (UniqueName: \"kubernetes.io/projected/11dd3938-0363-4020-b8c3-4a1510d0d400-kube-api-access-jz76b\") pod \"11dd3938-0363-4020-b8c3-4a1510d0d400\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.435383 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11dd3938-0363-4020-b8c3-4a1510d0d400-operator-scripts\") pod \"11dd3938-0363-4020-b8c3-4a1510d0d400\" (UID: \"11dd3938-0363-4020-b8c3-4a1510d0d400\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.435476 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqgvp\" (UniqueName: \"kubernetes.io/projected/b9421cc5-76da-4822-984c-7ac27c814dfe-kube-api-access-zqgvp\") pod \"b9421cc5-76da-4822-984c-7ac27c814dfe\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.435525 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4j7p\" (UniqueName: \"kubernetes.io/projected/d4fed022-7a29-4dd3-8660-be750880438c-kube-api-access-c4j7p\") pod \"d4fed022-7a29-4dd3-8660-be750880438c\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.435585 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9421cc5-76da-4822-984c-7ac27c814dfe-operator-scripts\") pod \"b9421cc5-76da-4822-984c-7ac27c814dfe\" (UID: \"b9421cc5-76da-4822-984c-7ac27c814dfe\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.435613 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fed022-7a29-4dd3-8660-be750880438c-operator-scripts\") pod \"d4fed022-7a29-4dd3-8660-be750880438c\" (UID: \"d4fed022-7a29-4dd3-8660-be750880438c\") " Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.437753 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7x4t\" (UniqueName: \"kubernetes.io/projected/127b440a-bcde-4b51-ae43-221b093dcdb7-kube-api-access-c7x4t\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.438473 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9421cc5-76da-4822-984c-7ac27c814dfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9421cc5-76da-4822-984c-7ac27c814dfe" (UID: "b9421cc5-76da-4822-984c-7ac27c814dfe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.438576 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11dd3938-0363-4020-b8c3-4a1510d0d400-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11dd3938-0363-4020-b8c3-4a1510d0d400" (UID: "11dd3938-0363-4020-b8c3-4a1510d0d400"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.439410 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fed022-7a29-4dd3-8660-be750880438c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4fed022-7a29-4dd3-8660-be750880438c" (UID: "d4fed022-7a29-4dd3-8660-be750880438c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.443195 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11dd3938-0363-4020-b8c3-4a1510d0d400-kube-api-access-jz76b" (OuterVolumeSpecName: "kube-api-access-jz76b") pod "11dd3938-0363-4020-b8c3-4a1510d0d400" (UID: "11dd3938-0363-4020-b8c3-4a1510d0d400"). InnerVolumeSpecName "kube-api-access-jz76b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.444726 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fed022-7a29-4dd3-8660-be750880438c-kube-api-access-c4j7p" (OuterVolumeSpecName: "kube-api-access-c4j7p") pod "d4fed022-7a29-4dd3-8660-be750880438c" (UID: "d4fed022-7a29-4dd3-8660-be750880438c"). InnerVolumeSpecName "kube-api-access-c4j7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.464440 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9421cc5-76da-4822-984c-7ac27c814dfe-kube-api-access-zqgvp" (OuterVolumeSpecName: "kube-api-access-zqgvp") pod "b9421cc5-76da-4822-984c-7ac27c814dfe" (UID: "b9421cc5-76da-4822-984c-7ac27c814dfe"). InnerVolumeSpecName "kube-api-access-zqgvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.539342 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz76b\" (UniqueName: \"kubernetes.io/projected/11dd3938-0363-4020-b8c3-4a1510d0d400-kube-api-access-jz76b\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.539387 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11dd3938-0363-4020-b8c3-4a1510d0d400-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.539400 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqgvp\" (UniqueName: \"kubernetes.io/projected/b9421cc5-76da-4822-984c-7ac27c814dfe-kube-api-access-zqgvp\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.539412 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4j7p\" (UniqueName: \"kubernetes.io/projected/d4fed022-7a29-4dd3-8660-be750880438c-kube-api-access-c4j7p\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.539424 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9421cc5-76da-4822-984c-7ac27c814dfe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.539435 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fed022-7a29-4dd3-8660-be750880438c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.797784 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ggdjk" 
event={"ID":"11dd3938-0363-4020-b8c3-4a1510d0d400","Type":"ContainerDied","Data":"404a5b3903e8859f8601cb862fd9925ae7686baf3a08f379e9846d36f5bcf1cd"} Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.797821 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ggdjk" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.797831 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="404a5b3903e8859f8601cb862fd9925ae7686baf3a08f379e9846d36f5bcf1cd" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.800609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef60-account-create-update-262kh" event={"ID":"127b440a-bcde-4b51-ae43-221b093dcdb7","Type":"ContainerDied","Data":"1332fa8cdfae800fe665430eaa94b6443605730b0f8eb7207feb2997ad77dedc"} Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.800744 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1332fa8cdfae800fe665430eaa94b6443605730b0f8eb7207feb2997ad77dedc" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.800898 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ef60-account-create-update-262kh" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.804448 4768 generic.go:334] "Generic (PLEG): container finished" podID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerID="ab2e5eabb27b33b6e2f649f9bd60b7a4a585d450e7491d28b7e5eb56865946c2" exitCode=0 Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.804575 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerDied","Data":"ab2e5eabb27b33b6e2f649f9bd60b7a4a585d450e7491d28b7e5eb56865946c2"} Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.806347 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-930a-account-create-update-vzjbq" event={"ID":"b9421cc5-76da-4822-984c-7ac27c814dfe","Type":"ContainerDied","Data":"4f3ab6e85cf66ea8baf90cc7d625c2c4de99cd3a291f8685fd475d177d2e5318"} Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.806377 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f3ab6e85cf66ea8baf90cc7d625c2c4de99cd3a291f8685fd475d177d2e5318" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.806456 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-930a-account-create-update-vzjbq" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.810009 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f310-account-create-update-pssrg" event={"ID":"d4fed022-7a29-4dd3-8660-be750880438c","Type":"ContainerDied","Data":"2fd19981072b1e3eba0e2bb5ff57e2f8e79a9c4f87db15ff76e5640d90e3bc38"} Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.810209 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd19981072b1e3eba0e2bb5ff57e2f8e79a9c4f87db15ff76e5640d90e3bc38" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.810340 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f310-account-create-update-pssrg" Feb 17 13:56:36 crc kubenswrapper[4768]: I0217 13:56:36.986686 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.051791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-log-httpd\") pod \"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.051866 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-combined-ca-bundle\") pod \"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.051928 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-run-httpd\") pod 
\"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.051990 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-sg-core-conf-yaml\") pod \"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.052023 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-config-data\") pod \"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.052063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-ceilometer-tls-certs\") pod \"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.052196 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-scripts\") pod \"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.052227 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2lz\" (UniqueName: \"kubernetes.io/projected/b0805602-c3b7-4644-a94a-3d1c7d55844e-kube-api-access-8x2lz\") pod \"b0805602-c3b7-4644-a94a-3d1c7d55844e\" (UID: \"b0805602-c3b7-4644-a94a-3d1c7d55844e\") " Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.058510 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.059035 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.060504 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0805602-c3b7-4644-a94a-3d1c7d55844e-kube-api-access-8x2lz" (OuterVolumeSpecName: "kube-api-access-8x2lz") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "kube-api-access-8x2lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.061469 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-scripts" (OuterVolumeSpecName: "scripts") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.100145 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.133423 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.151451 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.154316 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.154347 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x2lz\" (UniqueName: \"kubernetes.io/projected/b0805602-c3b7-4644-a94a-3d1c7d55844e-kube-api-access-8x2lz\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.154359 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.154370 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.154379 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0805602-c3b7-4644-a94a-3d1c7d55844e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.154387 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.154394 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.176372 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-config-data" (OuterVolumeSpecName: "config-data") pod "b0805602-c3b7-4644-a94a-3d1c7d55844e" (UID: "b0805602-c3b7-4644-a94a-3d1c7d55844e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.256700 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0805602-c3b7-4644-a94a-3d1c7d55844e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.273383 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.273547 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.319877 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.818831 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.818862 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0805602-c3b7-4644-a94a-3d1c7d55844e","Type":"ContainerDied","Data":"6bf645155a08b4cdf6c6f86b23148fbebc8d1c20bd44f70b7c301ba948f4e2d2"} Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.818930 4768 scope.go:117] "RemoveContainer" containerID="33c01961b593d1cefa883cafbb64a9e2dde45c5497eb62006f755e150dd59b38" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.844852 4768 scope.go:117] "RemoveContainer" containerID="37636f611a4026bad57698d28844676493572866cabb321d162de802764be4e7" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.845617 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.854731 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 
13:56:37.870005 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870561 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127b440a-bcde-4b51-ae43-221b093dcdb7" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870581 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="127b440a-bcde-4b51-ae43-221b093dcdb7" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870619 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fed022-7a29-4dd3-8660-be750880438c" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870627 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fed022-7a29-4dd3-8660-be750880438c" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870637 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9421cc5-76da-4822-984c-7ac27c814dfe" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870654 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9421cc5-76da-4822-984c-7ac27c814dfe" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870706 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11dd3938-0363-4020-b8c3-4a1510d0d400" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870714 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="11dd3938-0363-4020-b8c3-4a1510d0d400" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870723 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc679a7-9d70-46d9-a89b-69e761fcf366" containerName="mariadb-database-create" 
Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870729 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc679a7-9d70-46d9-a89b-69e761fcf366" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870739 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f159e76f-1606-4a1d-8ce3-647851c11669" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870744 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f159e76f-1606-4a1d-8ce3-647851c11669" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870778 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="proxy-httpd" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870786 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="proxy-httpd" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870875 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="sg-core" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870885 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="sg-core" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870905 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="ceilometer-central-agent" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870910 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="ceilometer-central-agent" Feb 17 13:56:37 crc kubenswrapper[4768]: E0217 13:56:37.870923 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" 
containerName="ceilometer-notification-agent" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.870929 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="ceilometer-notification-agent" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871400 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fed022-7a29-4dd3-8660-be750880438c" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871499 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="ceilometer-notification-agent" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871516 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="proxy-httpd" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871553 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9421cc5-76da-4822-984c-7ac27c814dfe" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871562 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="sg-core" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871571 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="127b440a-bcde-4b51-ae43-221b093dcdb7" containerName="mariadb-account-create-update" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871581 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" containerName="ceilometer-central-agent" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871594 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc679a7-9d70-46d9-a89b-69e761fcf366" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871603 
4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f159e76f-1606-4a1d-8ce3-647851c11669" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.871640 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="11dd3938-0363-4020-b8c3-4a1510d0d400" containerName="mariadb-database-create" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.872201 4768 scope.go:117] "RemoveContainer" containerID="4b5af501f9c4b3c1bdd8415879a0e5f8587909dc3bc62517f61ee3e942ee38d4" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.877250 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.880268 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.880500 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.880817 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.885549 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.937766 4768 scope.go:117] "RemoveContainer" containerID="ab2e5eabb27b33b6e2f649f9bd60b7a4a585d450e7491d28b7e5eb56865946c2" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972201 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972307 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbfbf\" (UniqueName: \"kubernetes.io/projected/d2bf4260-3d56-4912-9f47-8b27ce491e59-kube-api-access-sbfbf\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972379 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-config-data\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972433 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972468 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-scripts\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972493 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-log-httpd\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972583 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:37 crc kubenswrapper[4768]: I0217 13:56:37.972739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-run-httpd\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.074763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.074839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-run-httpd\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.074893 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.074954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbfbf\" (UniqueName: \"kubernetes.io/projected/d2bf4260-3d56-4912-9f47-8b27ce491e59-kube-api-access-sbfbf\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 
17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.075006 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-config-data\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.075054 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.075087 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-scripts\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.075137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-log-httpd\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.075430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-run-httpd\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.075688 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-log-httpd\") pod \"ceilometer-0\" (UID: 
\"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.078837 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.079372 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.079731 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.079993 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-scripts\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.081341 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-config-data\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.095934 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbfbf\" (UniqueName: 
\"kubernetes.io/projected/d2bf4260-3d56-4912-9f47-8b27ce491e59-kube-api-access-sbfbf\") pod \"ceilometer-0\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") " pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.230685 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.673177 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:56:38 crc kubenswrapper[4768]: I0217 13:56:38.829028 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerStarted","Data":"31d48febdcb3b4ee17a2d1040edf19ab8becde2376cfbc76c75d5051a9660287"} Feb 17 13:56:39 crc kubenswrapper[4768]: I0217 13:56:39.551132 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0805602-c3b7-4644-a94a-3d1c7d55844e" path="/var/lib/kubelet/pods/b0805602-c3b7-4644-a94a-3d1c7d55844e/volumes" Feb 17 13:56:39 crc kubenswrapper[4768]: I0217 13:56:39.849602 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerStarted","Data":"a512b9f2e66d66f2b5c629200551a062a09f64b4b60f11c5bdb8e4f8513fadf4"} Feb 17 13:56:39 crc kubenswrapper[4768]: E0217 13:56:39.870404 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice/crio-1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5\": RecentStats: unable to find data in memory cache]" Feb 17 13:56:40 crc kubenswrapper[4768]: I0217 13:56:40.867529 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerStarted","Data":"196888e320405329900e43fa42705bac7ded1817257cf06665ae0dc01560c7fa"} Feb 17 13:56:40 crc kubenswrapper[4768]: I0217 13:56:40.868077 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerStarted","Data":"4799f6e6b236fc9f2b45e7eb1fc1b502ec9ae9f084922676ff8490f6ac5cc5d1"} Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.410663 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lqvwx"] Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.412353 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lqvwx" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.414426 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.414660 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9thk7" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.414735 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.444796 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lqvwx"] Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.462136 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-scripts\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx" Feb 17 13:56:42 crc 
kubenswrapper[4768]: I0217 13:56:42.462187 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.462216 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-config-data\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.462280 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9kr\" (UniqueName: \"kubernetes.io/projected/e37a5596-9bc4-4df1-af63-e4475450a07f-kube-api-access-np9kr\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.564009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-scripts\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx" Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.564370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " 
pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.564411 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-config-data\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.564596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np9kr\" (UniqueName: \"kubernetes.io/projected/e37a5596-9bc4-4df1-af63-e4475450a07f-kube-api-access-np9kr\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.569179 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.569244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-scripts\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.570797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-config-data\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.584527 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np9kr\" (UniqueName: \"kubernetes.io/projected/e37a5596-9bc4-4df1-af63-e4475450a07f-kube-api-access-np9kr\") pod \"nova-cell0-conductor-db-sync-lqvwx\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") " pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.828161 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.888746 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerStarted","Data":"601982ee909457f5212417f463a7fedb03e90e91c76709d9dbcc6705ff59b5d9"}
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.889033 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 13:56:42 crc kubenswrapper[4768]: I0217 13:56:42.917761 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3081386999999998 podStartE2EDuration="5.91773562s" podCreationTimestamp="2026-02-17 13:56:37 +0000 UTC" firstStartedPulling="2026-02-17 13:56:38.688038063 +0000 UTC m=+1217.967424505" lastFinishedPulling="2026-02-17 13:56:42.297634993 +0000 UTC m=+1221.577021425" observedRunningTime="2026-02-17 13:56:42.91411543 +0000 UTC m=+1222.193501872" watchObservedRunningTime="2026-02-17 13:56:42.91773562 +0000 UTC m=+1222.197122062"
Feb 17 13:56:43 crc kubenswrapper[4768]: I0217 13:56:43.337967 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lqvwx"]
Feb 17 13:56:43 crc kubenswrapper[4768]: W0217 13:56:43.340815 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode37a5596_9bc4_4df1_af63_e4475450a07f.slice/crio-9976726d9aa693b8b16b0fd35fd5969474e97281e1536366194d29035e3634ca WatchSource:0}: Error finding container 9976726d9aa693b8b16b0fd35fd5969474e97281e1536366194d29035e3634ca: Status 404 returned error can't find the container with id 9976726d9aa693b8b16b0fd35fd5969474e97281e1536366194d29035e3634ca
Feb 17 13:56:43 crc kubenswrapper[4768]: I0217 13:56:43.896619 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lqvwx" event={"ID":"e37a5596-9bc4-4df1-af63-e4475450a07f","Type":"ContainerStarted","Data":"9976726d9aa693b8b16b0fd35fd5969474e97281e1536366194d29035e3634ca"}
Feb 17 13:56:44 crc kubenswrapper[4768]: I0217 13:56:44.412655 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:56:44 crc kubenswrapper[4768]: I0217 13:56:44.907371 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-notification-agent" containerID="cri-o://4799f6e6b236fc9f2b45e7eb1fc1b502ec9ae9f084922676ff8490f6ac5cc5d1" gracePeriod=30
Feb 17 13:56:44 crc kubenswrapper[4768]: I0217 13:56:44.907392 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-central-agent" containerID="cri-o://a512b9f2e66d66f2b5c629200551a062a09f64b4b60f11c5bdb8e4f8513fadf4" gracePeriod=30
Feb 17 13:56:44 crc kubenswrapper[4768]: I0217 13:56:44.907386 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="sg-core" containerID="cri-o://196888e320405329900e43fa42705bac7ded1817257cf06665ae0dc01560c7fa" gracePeriod=30
Feb 17 13:56:44 crc kubenswrapper[4768]: I0217 13:56:44.907435 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="proxy-httpd" containerID="cri-o://601982ee909457f5212417f463a7fedb03e90e91c76709d9dbcc6705ff59b5d9" gracePeriod=30
Feb 17 13:56:45 crc kubenswrapper[4768]: I0217 13:56:45.956633 4768 generic.go:334] "Generic (PLEG): container finished" podID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerID="601982ee909457f5212417f463a7fedb03e90e91c76709d9dbcc6705ff59b5d9" exitCode=0
Feb 17 13:56:45 crc kubenswrapper[4768]: I0217 13:56:45.956667 4768 generic.go:334] "Generic (PLEG): container finished" podID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerID="196888e320405329900e43fa42705bac7ded1817257cf06665ae0dc01560c7fa" exitCode=2
Feb 17 13:56:45 crc kubenswrapper[4768]: I0217 13:56:45.956677 4768 generic.go:334] "Generic (PLEG): container finished" podID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerID="4799f6e6b236fc9f2b45e7eb1fc1b502ec9ae9f084922676ff8490f6ac5cc5d1" exitCode=0
Feb 17 13:56:45 crc kubenswrapper[4768]: I0217 13:56:45.956715 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerDied","Data":"601982ee909457f5212417f463a7fedb03e90e91c76709d9dbcc6705ff59b5d9"}
Feb 17 13:56:45 crc kubenswrapper[4768]: I0217 13:56:45.956781 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerDied","Data":"196888e320405329900e43fa42705bac7ded1817257cf06665ae0dc01560c7fa"}
Feb 17 13:56:45 crc kubenswrapper[4768]: I0217 13:56:45.956794 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerDied","Data":"4799f6e6b236fc9f2b45e7eb1fc1b502ec9ae9f084922676ff8490f6ac5cc5d1"}
Feb 17 13:56:49 crc kubenswrapper[4768]: I0217 13:56:49.992214 4768 generic.go:334] "Generic (PLEG): container finished" podID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerID="a512b9f2e66d66f2b5c629200551a062a09f64b4b60f11c5bdb8e4f8513fadf4" exitCode=0
Feb 17 13:56:49 crc kubenswrapper[4768]: I0217 13:56:49.992308 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerDied","Data":"a512b9f2e66d66f2b5c629200551a062a09f64b4b60f11c5bdb8e4f8513fadf4"}
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.167143 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 13:56:50 crc kubenswrapper[4768]: E0217 13:56:50.167168 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice/crio-1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5\": RecentStats: unable to find data in memory cache]"
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.205656 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-run-httpd\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.206097 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-scripts\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.206151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-ceilometer-tls-certs\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.206202 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-config-data\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.206271 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-log-httpd\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.206306 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-sg-core-conf-yaml\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.206388 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-combined-ca-bundle\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.206431 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbfbf\" (UniqueName: \"kubernetes.io/projected/d2bf4260-3d56-4912-9f47-8b27ce491e59-kube-api-access-sbfbf\") pod \"d2bf4260-3d56-4912-9f47-8b27ce491e59\" (UID: \"d2bf4260-3d56-4912-9f47-8b27ce491e59\") "
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.208584 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.208770 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.220362 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bf4260-3d56-4912-9f47-8b27ce491e59-kube-api-access-sbfbf" (OuterVolumeSpecName: "kube-api-access-sbfbf") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "kube-api-access-sbfbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.220565 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-scripts" (OuterVolumeSpecName: "scripts") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.237734 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.259663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.291242 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.308406 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.308453 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.308463 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.308472 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.308480 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.308503 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbfbf\" (UniqueName: \"kubernetes.io/projected/d2bf4260-3d56-4912-9f47-8b27ce491e59-kube-api-access-sbfbf\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.308514 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2bf4260-3d56-4912-9f47-8b27ce491e59-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.337827 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-config-data" (OuterVolumeSpecName: "config-data") pod "d2bf4260-3d56-4912-9f47-8b27ce491e59" (UID: "d2bf4260-3d56-4912-9f47-8b27ce491e59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:56:50 crc kubenswrapper[4768]: I0217 13:56:50.410325 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2bf4260-3d56-4912-9f47-8b27ce491e59-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.029610 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.034309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2bf4260-3d56-4912-9f47-8b27ce491e59","Type":"ContainerDied","Data":"31d48febdcb3b4ee17a2d1040edf19ab8becde2376cfbc76c75d5051a9660287"}
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.034427 4768 scope.go:117] "RemoveContainer" containerID="601982ee909457f5212417f463a7fedb03e90e91c76709d9dbcc6705ff59b5d9"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.035148 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lqvwx" event={"ID":"e37a5596-9bc4-4df1-af63-e4475450a07f","Type":"ContainerStarted","Data":"2ecb5297fc86f40a5e569044850dc88193c82cec590e64e12d688999dccf833d"}
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.062311 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lqvwx" podStartSLOduration=2.5789632 podStartE2EDuration="9.062295368s" podCreationTimestamp="2026-02-17 13:56:42 +0000 UTC" firstStartedPulling="2026-02-17 13:56:43.342788055 +0000 UTC m=+1222.622174497" lastFinishedPulling="2026-02-17 13:56:49.826120233 +0000 UTC m=+1229.105506665" observedRunningTime="2026-02-17 13:56:51.059545431 +0000 UTC m=+1230.338931873" watchObservedRunningTime="2026-02-17 13:56:51.062295368 +0000 UTC m=+1230.341681810"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.089898 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.110969 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.112391 4768 scope.go:117] "RemoveContainer" containerID="196888e320405329900e43fa42705bac7ded1817257cf06665ae0dc01560c7fa"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.121965 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:56:51 crc kubenswrapper[4768]: E0217 13:56:51.122503 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-central-agent"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122529 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-central-agent"
Feb 17 13:56:51 crc kubenswrapper[4768]: E0217 13:56:51.122548 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="proxy-httpd"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122557 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="proxy-httpd"
Feb 17 13:56:51 crc kubenswrapper[4768]: E0217 13:56:51.122587 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-notification-agent"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122595 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-notification-agent"
Feb 17 13:56:51 crc kubenswrapper[4768]: E0217 13:56:51.122609 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="sg-core"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122616 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="sg-core"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122849 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="sg-core"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122875 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-central-agent"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122884 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="proxy-httpd"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.122898 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" containerName="ceilometer-notification-agent"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.127872 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.133468 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.133696 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.133891 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.138009 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.165350 4768 scope.go:117] "RemoveContainer" containerID="4799f6e6b236fc9f2b45e7eb1fc1b502ec9ae9f084922676ff8490f6ac5cc5d1"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.185007 4768 scope.go:117] "RemoveContainer" containerID="a512b9f2e66d66f2b5c629200551a062a09f64b4b60f11c5bdb8e4f8513fadf4"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226340 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226397 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226442 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-run-httpd\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226477 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wv74\" (UniqueName: \"kubernetes.io/projected/85ca1076-5485-492d-a920-d51cf7b376f8-kube-api-access-9wv74\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-scripts\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226575 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-log-httpd\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.226683 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-config-data\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328152 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-log-httpd\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328218 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-config-data\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328244 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328286 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328324 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-run-httpd\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328359 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wv74\" (UniqueName: \"kubernetes.io/projected/85ca1076-5485-492d-a920-d51cf7b376f8-kube-api-access-9wv74\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.328379 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-scripts\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.329409 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-run-httpd\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.329931 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-log-httpd\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.336962 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.344403 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.345281 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-scripts\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.350995 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-config-data\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.351366 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.352846 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wv74\" (UniqueName: \"kubernetes.io/projected/85ca1076-5485-492d-a920-d51cf7b376f8-kube-api-access-9wv74\") pod \"ceilometer-0\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.464005 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.553878 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bf4260-3d56-4912-9f47-8b27ce491e59" path="/var/lib/kubelet/pods/d2bf4260-3d56-4912-9f47-8b27ce491e59/volumes"
Feb 17 13:56:51 crc kubenswrapper[4768]: W0217 13:56:51.963898 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85ca1076_5485_492d_a920_d51cf7b376f8.slice/crio-c4f9405d0a29c454e7119dc5d24c76eb1b4ff5aa43f039040e548d296343a894 WatchSource:0}: Error finding container c4f9405d0a29c454e7119dc5d24c76eb1b4ff5aa43f039040e548d296343a894: Status 404 returned error can't find the container with id c4f9405d0a29c454e7119dc5d24c76eb1b4ff5aa43f039040e548d296343a894
Feb 17 13:56:51 crc kubenswrapper[4768]: I0217 13:56:51.976675 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 13:56:52 crc kubenswrapper[4768]: I0217 13:56:52.043378 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerStarted","Data":"c4f9405d0a29c454e7119dc5d24c76eb1b4ff5aa43f039040e548d296343a894"}
Feb 17 13:56:53 crc kubenswrapper[4768]: I0217 13:56:53.055434 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerStarted","Data":"ac7ddf9eb06d6f18d7342a13b09c6dba4c27fc305d6af225083538a2f6409c04"}
Feb 17 13:56:56 crc kubenswrapper[4768]: I0217 13:56:56.098545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerStarted","Data":"2a7e21d4192de968fb6d5d8f9994980f98e62ef160bd5436c786b903aaa24c6c"}
Feb 17 13:56:56 crc kubenswrapper[4768]: I0217 13:56:56.099932 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerStarted","Data":"615d8f7a44e0f946114808e2d38caf13a2c7181d71f6317fcce1c0f02c291289"}
Feb 17 13:56:58 crc kubenswrapper[4768]: I0217 13:56:58.123286 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerStarted","Data":"2df5118ed0669bb5e795d62dcce34e68c36a4595b69a6d6101d81fbc8264afa0"}
Feb 17 13:56:58 crc kubenswrapper[4768]: I0217 13:56:58.123671 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 13:56:58 crc kubenswrapper[4768]: I0217 13:56:58.167927 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.747767649 podStartE2EDuration="7.167909284s" podCreationTimestamp="2026-02-17 13:56:51 +0000 UTC" firstStartedPulling="2026-02-17 13:56:51.967078452 +0000 UTC m=+1231.246464894" lastFinishedPulling="2026-02-17 13:56:57.387220047 +0000 UTC m=+1236.666606529" observedRunningTime="2026-02-17 13:56:58.155323173 +0000 UTC m=+1237.434709675" watchObservedRunningTime="2026-02-17 13:56:58.167909284 +0000 UTC m=+1237.447295726"
Feb 17 13:57:00 crc kubenswrapper[4768]: I0217 13:57:00.146561 4768 generic.go:334] "Generic (PLEG): container finished" podID="e37a5596-9bc4-4df1-af63-e4475450a07f" containerID="2ecb5297fc86f40a5e569044850dc88193c82cec590e64e12d688999dccf833d" exitCode=0
Feb 17 13:57:00 crc kubenswrapper[4768]: I0217 13:57:00.146702 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lqvwx" event={"ID":"e37a5596-9bc4-4df1-af63-e4475450a07f","Type":"ContainerDied","Data":"2ecb5297fc86f40a5e569044850dc88193c82cec590e64e12d688999dccf833d"}
Feb 17 13:57:00 crc kubenswrapper[4768]: E0217 13:57:00.414151 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice/crio-1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5\": RecentStats: unable to find data in memory cache]"
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.492743 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lqvwx"
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.618345 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-combined-ca-bundle\") pod \"e37a5596-9bc4-4df1-af63-e4475450a07f\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") "
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.618768 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-scripts\") pod \"e37a5596-9bc4-4df1-af63-e4475450a07f\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") "
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.618869 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-config-data\") pod \"e37a5596-9bc4-4df1-af63-e4475450a07f\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") "
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.618977 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np9kr\" (UniqueName: \"kubernetes.io/projected/e37a5596-9bc4-4df1-af63-e4475450a07f-kube-api-access-np9kr\") pod \"e37a5596-9bc4-4df1-af63-e4475450a07f\" (UID: \"e37a5596-9bc4-4df1-af63-e4475450a07f\") "
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.627553 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37a5596-9bc4-4df1-af63-e4475450a07f-kube-api-access-np9kr" (OuterVolumeSpecName: "kube-api-access-np9kr") pod "e37a5596-9bc4-4df1-af63-e4475450a07f" (UID: "e37a5596-9bc4-4df1-af63-e4475450a07f"). InnerVolumeSpecName "kube-api-access-np9kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.628061 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-scripts" (OuterVolumeSpecName: "scripts") pod "e37a5596-9bc4-4df1-af63-e4475450a07f" (UID: "e37a5596-9bc4-4df1-af63-e4475450a07f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.645177 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e37a5596-9bc4-4df1-af63-e4475450a07f" (UID: "e37a5596-9bc4-4df1-af63-e4475450a07f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.650051 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-config-data" (OuterVolumeSpecName: "config-data") pod "e37a5596-9bc4-4df1-af63-e4475450a07f" (UID: "e37a5596-9bc4-4df1-af63-e4475450a07f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.720754 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.720786 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.720795 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a5596-9bc4-4df1-af63-e4475450a07f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:01 crc kubenswrapper[4768]: I0217 13:57:01.720805 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np9kr\" (UniqueName: \"kubernetes.io/projected/e37a5596-9bc4-4df1-af63-e4475450a07f-kube-api-access-np9kr\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.170833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lqvwx" event={"ID":"e37a5596-9bc4-4df1-af63-e4475450a07f","Type":"ContainerDied","Data":"9976726d9aa693b8b16b0fd35fd5969474e97281e1536366194d29035e3634ca"} Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.170878 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9976726d9aa693b8b16b0fd35fd5969474e97281e1536366194d29035e3634ca" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.170945 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lqvwx" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.262898 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 13:57:02 crc kubenswrapper[4768]: E0217 13:57:02.265788 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37a5596-9bc4-4df1-af63-e4475450a07f" containerName="nova-cell0-conductor-db-sync" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.265824 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37a5596-9bc4-4df1-af63-e4475450a07f" containerName="nova-cell0-conductor-db-sync" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.266177 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37a5596-9bc4-4df1-af63-e4475450a07f" containerName="nova-cell0-conductor-db-sync" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.267145 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.275585 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.283330 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9thk7" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.311623 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.433466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c0f340-0c30-46ee-8c25-b4c96718d2b0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: 
I0217 13:57:02.433590 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pq6\" (UniqueName: \"kubernetes.io/projected/96c0f340-0c30-46ee-8c25-b4c96718d2b0-kube-api-access-v7pq6\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.433639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c0f340-0c30-46ee-8c25-b4c96718d2b0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.535222 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7pq6\" (UniqueName: \"kubernetes.io/projected/96c0f340-0c30-46ee-8c25-b4c96718d2b0-kube-api-access-v7pq6\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.535560 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c0f340-0c30-46ee-8c25-b4c96718d2b0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.535650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c0f340-0c30-46ee-8c25-b4c96718d2b0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.540533 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c0f340-0c30-46ee-8c25-b4c96718d2b0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.553979 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c0f340-0c30-46ee-8c25-b4c96718d2b0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.556588 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7pq6\" (UniqueName: \"kubernetes.io/projected/96c0f340-0c30-46ee-8c25-b4c96718d2b0-kube-api-access-v7pq6\") pod \"nova-cell0-conductor-0\" (UID: \"96c0f340-0c30-46ee-8c25-b4c96718d2b0\") " pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:02 crc kubenswrapper[4768]: I0217 13:57:02.589665 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:03 crc kubenswrapper[4768]: W0217 13:57:03.081621 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c0f340_0c30_46ee_8c25_b4c96718d2b0.slice/crio-405167ade947116b6d0c1aede359233812492410463c9f92e07395c829b60216 WatchSource:0}: Error finding container 405167ade947116b6d0c1aede359233812492410463c9f92e07395c829b60216: Status 404 returned error can't find the container with id 405167ade947116b6d0c1aede359233812492410463c9f92e07395c829b60216 Feb 17 13:57:03 crc kubenswrapper[4768]: I0217 13:57:03.083700 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 13:57:03 crc kubenswrapper[4768]: I0217 13:57:03.179308 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"96c0f340-0c30-46ee-8c25-b4c96718d2b0","Type":"ContainerStarted","Data":"405167ade947116b6d0c1aede359233812492410463c9f92e07395c829b60216"} Feb 17 13:57:04 crc kubenswrapper[4768]: I0217 13:57:04.223175 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"96c0f340-0c30-46ee-8c25-b4c96718d2b0","Type":"ContainerStarted","Data":"b50d386326eebf12706d535bca7021601321555b98d62e086fa98ef6730f8cf4"} Feb 17 13:57:04 crc kubenswrapper[4768]: I0217 13:57:04.223268 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:04 crc kubenswrapper[4768]: I0217 13:57:04.246400 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.246378222 podStartE2EDuration="2.246378222s" podCreationTimestamp="2026-02-17 13:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
13:57:04.240132253 +0000 UTC m=+1243.519518705" watchObservedRunningTime="2026-02-17 13:57:04.246378222 +0000 UTC m=+1243.525764664" Feb 17 13:57:10 crc kubenswrapper[4768]: E0217 13:57:10.639491 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice/crio-1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5\": RecentStats: unable to find data in memory cache]" Feb 17 13:57:12 crc kubenswrapper[4768]: I0217 13:57:12.616368 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.097148 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-j6688"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.098492 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.100994 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.101528 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.110003 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6688"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.261288 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-scripts\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.261360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-config-data\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.261432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c67h8\" (UniqueName: \"kubernetes.io/projected/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-kube-api-access-c67h8\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.261534 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.304544 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.306040 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.322440 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.333206 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.363471 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-config-data\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.363556 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.363587 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c67h8\" (UniqueName: \"kubernetes.io/projected/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-kube-api-access-c67h8\") pod \"nova-cell0-cell-mapping-j6688\" (UID: 
\"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.363615 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.363708 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.363749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gnzv\" (UniqueName: \"kubernetes.io/projected/7549ca8c-a029-408c-b740-6c376751dbf6-kube-api-access-5gnzv\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.363809 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-scripts\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.370251 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j6688\" (UID: 
\"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.374165 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-config-data\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.384825 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c67h8\" (UniqueName: \"kubernetes.io/projected/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-kube-api-access-c67h8\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.399034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-scripts\") pod \"nova-cell0-cell-mapping-j6688\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.432164 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.433876 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.441233 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.441631 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.448489 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.471853 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-config-data\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.471923 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gnzv\" (UniqueName: \"kubernetes.io/projected/7549ca8c-a029-408c-b740-6c376751dbf6-kube-api-access-5gnzv\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.471997 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.472026 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7ml\" (UniqueName: \"kubernetes.io/projected/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-kube-api-access-4k7ml\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.472074 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.472123 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.472146 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-logs\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.480621 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.494517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.560791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gnzv\" (UniqueName: \"kubernetes.io/projected/7549ca8c-a029-408c-b740-6c376751dbf6-kube-api-access-5gnzv\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"7549ca8c-a029-408c-b740-6c376751dbf6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.574321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.574393 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k7ml\" (UniqueName: \"kubernetes.io/projected/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-kube-api-access-4k7ml\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.574481 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-logs\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.574554 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-config-data\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.585413 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-logs\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.590180 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.601944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-config-data\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.602685 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.615186 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.623002 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.624897 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.643514 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7ml\" (UniqueName: \"kubernetes.io/projected/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-kube-api-access-4k7ml\") pod \"nova-metadata-0\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.660301 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.676728 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71481e86-0ed0-47c2-a15a-3379dacb425c-logs\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.676846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-config-data\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.676882 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlj48\" (UniqueName: \"kubernetes.io/projected/71481e86-0ed0-47c2-a15a-3379dacb425c-kube-api-access-dlj48\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.676900 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.723603 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.726643 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.729363 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.743020 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.763390 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zmmmh"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.765500 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.776263 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zmmmh"] Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779450 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-config-data\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779494 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnv7m\" (UniqueName: \"kubernetes.io/projected/fa54770b-b069-4fe4-b618-30ab5004db67-kube-api-access-pnv7m\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" 
Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-config-data\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-config\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779628 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779648 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlj48\" (UniqueName: \"kubernetes.io/projected/71481e86-0ed0-47c2-a15a-3379dacb425c-kube-api-access-dlj48\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779680 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779712 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779768 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779795 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbjk5\" (UniqueName: \"kubernetes.io/projected/bc797443-7e90-4b43-be24-df2291d9a72e-kube-api-access-wbjk5\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.779887 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71481e86-0ed0-47c2-a15a-3379dacb425c-logs\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.785071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71481e86-0ed0-47c2-a15a-3379dacb425c-logs\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.798538 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.799518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-config-data\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.809575 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlj48\" (UniqueName: \"kubernetes.io/projected/71481e86-0ed0-47c2-a15a-3379dacb425c-kube-api-access-dlj48\") pod \"nova-api-0\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " pod="openstack/nova-api-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbjk5\" (UniqueName: \"kubernetes.io/projected/bc797443-7e90-4b43-be24-df2291d9a72e-kube-api-access-wbjk5\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " 
pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881243 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881406 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-config-data\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881432 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnv7m\" (UniqueName: \"kubernetes.io/projected/fa54770b-b069-4fe4-b618-30ab5004db67-kube-api-access-pnv7m\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881474 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-config\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881512 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881532 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.881547 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.884676 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.884730 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.884730 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-config\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.884834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.885354 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.890816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.895675 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-config-data\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.896886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.898828 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.904917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnv7m\" (UniqueName: \"kubernetes.io/projected/fa54770b-b069-4fe4-b618-30ab5004db67-kube-api-access-pnv7m\") pod \"nova-scheduler-0\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.907270 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbjk5\" (UniqueName: \"kubernetes.io/projected/bc797443-7e90-4b43-be24-df2291d9a72e-kube-api-access-wbjk5\") pod \"dnsmasq-dns-757b4f8459-zmmmh\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:13 crc kubenswrapper[4768]: I0217 13:57:13.979733 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.058297 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.109049 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6688"] Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.111935 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:14 crc kubenswrapper[4768]: W0217 13:57:14.125137 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab5558d8_d8ab_4e56_a053_bd878be0dfb7.slice/crio-c8e6c4403378a3f174f198a2ba800820d167cce3c58e846f5939ba7d56833698 WatchSource:0}: Error finding container c8e6c4403378a3f174f198a2ba800820d167cce3c58e846f5939ba7d56833698: Status 404 returned error can't find the container with id c8e6c4403378a3f174f198a2ba800820d167cce3c58e846f5939ba7d56833698 Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.242674 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.365768 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6688" event={"ID":"ab5558d8-d8ab-4e56-a053-bd878be0dfb7","Type":"ContainerStarted","Data":"c8e6c4403378a3f174f198a2ba800820d167cce3c58e846f5939ba7d56833698"} Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.373004 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.394503 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ebd0ede-8ec0-405a-9a8c-75ad432bd515","Type":"ContainerStarted","Data":"23ecc59efbf352f757b52a4a32a6915ae99082f9884650746d6359b487b5e632"} Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.552599 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wdm"] Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.554517 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.558954 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.559162 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.563547 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wdm"] Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.609714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.609840 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-config-data\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.609911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-scripts\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.610130 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mz9cg\" (UniqueName: \"kubernetes.io/projected/7636550e-37fd-4031-a05d-603fef57553a-kube-api-access-mz9cg\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.722807 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.722891 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-config-data\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.723004 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-scripts\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.725020 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9cg\" (UniqueName: \"kubernetes.io/projected/7636550e-37fd-4031-a05d-603fef57553a-kube-api-access-mz9cg\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.737064 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.742339 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.744972 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-config-data\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.745522 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-scripts\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.750414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9cg\" (UniqueName: \"kubernetes.io/projected/7636550e-37fd-4031-a05d-603fef57553a-kube-api-access-mz9cg\") pod \"nova-cell1-conductor-db-sync-z5wdm\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.806551 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.874756 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:14 crc kubenswrapper[4768]: I0217 13:57:14.939236 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zmmmh"] Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.372702 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wdm"] Feb 17 13:57:15 crc kubenswrapper[4768]: W0217 13:57:15.390422 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7636550e_37fd_4031_a05d_603fef57553a.slice/crio-edc8ef7389a5085acd0ddf37e64f224eed00b5c34fd3ea7c95b5fca3e8f1c795 WatchSource:0}: Error finding container edc8ef7389a5085acd0ddf37e64f224eed00b5c34fd3ea7c95b5fca3e8f1c795: Status 404 returned error can't find the container with id edc8ef7389a5085acd0ddf37e64f224eed00b5c34fd3ea7c95b5fca3e8f1c795 Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.404403 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7549ca8c-a029-408c-b740-6c376751dbf6","Type":"ContainerStarted","Data":"a845bbd91f149d5bc2faab9bc753bc26eaebf6a5ee4cd66c29572055650cb2ad"} Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.412082 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71481e86-0ed0-47c2-a15a-3379dacb425c","Type":"ContainerStarted","Data":"482617c7e2a10641ea2c3d22bb6f885cf733d40855e3e0c388fb0d3a95f2933a"} Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.414042 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" event={"ID":"7636550e-37fd-4031-a05d-603fef57553a","Type":"ContainerStarted","Data":"edc8ef7389a5085acd0ddf37e64f224eed00b5c34fd3ea7c95b5fca3e8f1c795"} Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.415968 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="bc797443-7e90-4b43-be24-df2291d9a72e" containerID="928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4" exitCode=0 Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.416179 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" event={"ID":"bc797443-7e90-4b43-be24-df2291d9a72e","Type":"ContainerDied","Data":"928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4"} Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.416225 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" event={"ID":"bc797443-7e90-4b43-be24-df2291d9a72e","Type":"ContainerStarted","Data":"3113e5b93bd11fe5e3416ae7d4585cd788af7333c9081dc426602798ca9a8cee"} Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.420265 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa54770b-b069-4fe4-b618-30ab5004db67","Type":"ContainerStarted","Data":"c0777139f89ace0754282f33110f172efbf95013c97c46c7fdeac21fe937cf69"} Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.426920 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6688" event={"ID":"ab5558d8-d8ab-4e56-a053-bd878be0dfb7","Type":"ContainerStarted","Data":"b2a1f2873dd7eae98bad852dc5d0a50cfdc71bae83dc7248c89fa165e7458f33"} Feb 17 13:57:15 crc kubenswrapper[4768]: I0217 13:57:15.477623 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-j6688" podStartSLOduration=2.477572236 podStartE2EDuration="2.477572236s" podCreationTimestamp="2026-02-17 13:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:15.459801645 +0000 UTC m=+1254.739188087" watchObservedRunningTime="2026-02-17 13:57:15.477572236 +0000 UTC m=+1254.756958688" Feb 17 13:57:16 crc kubenswrapper[4768]: I0217 
13:57:16.438825 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" event={"ID":"7636550e-37fd-4031-a05d-603fef57553a","Type":"ContainerStarted","Data":"2386d79fdae6ad85a03996c9a38134a990c02eb25b593969a3ee16241de00a38"} Feb 17 13:57:16 crc kubenswrapper[4768]: I0217 13:57:16.459044 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" podStartSLOduration=2.459029388 podStartE2EDuration="2.459029388s" podCreationTimestamp="2026-02-17 13:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:16.455464901 +0000 UTC m=+1255.734851343" watchObservedRunningTime="2026-02-17 13:57:16.459029388 +0000 UTC m=+1255.738415830" Feb 17 13:57:17 crc kubenswrapper[4768]: I0217 13:57:17.001680 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:17 crc kubenswrapper[4768]: I0217 13:57:17.010688 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:17 crc kubenswrapper[4768]: I0217 13:57:17.463340 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" event={"ID":"bc797443-7e90-4b43-be24-df2291d9a72e","Type":"ContainerStarted","Data":"262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7"} Feb 17 13:57:17 crc kubenswrapper[4768]: I0217 13:57:17.464073 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:17 crc kubenswrapper[4768]: I0217 13:57:17.498072 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" podStartSLOduration=4.498053228 podStartE2EDuration="4.498053228s" podCreationTimestamp="2026-02-17 13:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:17.484384928 +0000 UTC m=+1256.763771370" watchObservedRunningTime="2026-02-17 13:57:17.498053228 +0000 UTC m=+1256.777439670" Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.473519 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7549ca8c-a029-408c-b740-6c376751dbf6","Type":"ContainerStarted","Data":"e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964"} Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.473598 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7549ca8c-a029-408c-b740-6c376751dbf6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964" gracePeriod=30 Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.481870 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71481e86-0ed0-47c2-a15a-3379dacb425c","Type":"ContainerStarted","Data":"81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff"} Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.481918 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71481e86-0ed0-47c2-a15a-3379dacb425c","Type":"ContainerStarted","Data":"7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed"} Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.484189 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa54770b-b069-4fe4-b618-30ab5004db67","Type":"ContainerStarted","Data":"39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda"} Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.490709 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-log" containerID="cri-o://88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e" gracePeriod=30 Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.490860 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ebd0ede-8ec0-405a-9a8c-75ad432bd515","Type":"ContainerStarted","Data":"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4"} Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.490899 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ebd0ede-8ec0-405a-9a8c-75ad432bd515","Type":"ContainerStarted","Data":"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e"} Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.490983 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-metadata" containerID="cri-o://40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4" gracePeriod=30 Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.502045 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6632245919999997 podStartE2EDuration="5.50201143s" podCreationTimestamp="2026-02-17 13:57:13 +0000 UTC" firstStartedPulling="2026-02-17 13:57:14.436199122 +0000 UTC m=+1253.715585564" lastFinishedPulling="2026-02-17 13:57:17.27498595 +0000 UTC m=+1256.554372402" observedRunningTime="2026-02-17 13:57:18.491644209 +0000 UTC m=+1257.771030651" watchObservedRunningTime="2026-02-17 13:57:18.50201143 +0000 UTC m=+1257.781397882" Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.525593 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.530399047 
podStartE2EDuration="5.525573528s" podCreationTimestamp="2026-02-17 13:57:13 +0000 UTC" firstStartedPulling="2026-02-17 13:57:14.286429847 +0000 UTC m=+1253.565816289" lastFinishedPulling="2026-02-17 13:57:17.281604328 +0000 UTC m=+1256.560990770" observedRunningTime="2026-02-17 13:57:18.511695382 +0000 UTC m=+1257.791081824" watchObservedRunningTime="2026-02-17 13:57:18.525573528 +0000 UTC m=+1257.804959970" Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.542328 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.998582562 podStartE2EDuration="5.542310621s" podCreationTimestamp="2026-02-17 13:57:13 +0000 UTC" firstStartedPulling="2026-02-17 13:57:14.732627837 +0000 UTC m=+1254.012014279" lastFinishedPulling="2026-02-17 13:57:17.276355896 +0000 UTC m=+1256.555742338" observedRunningTime="2026-02-17 13:57:18.53782136 +0000 UTC m=+1257.817207822" watchObservedRunningTime="2026-02-17 13:57:18.542310621 +0000 UTC m=+1257.821697063" Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.564205 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.094533699 podStartE2EDuration="5.564185683s" podCreationTimestamp="2026-02-17 13:57:13 +0000 UTC" firstStartedPulling="2026-02-17 13:57:14.805697495 +0000 UTC m=+1254.085083927" lastFinishedPulling="2026-02-17 13:57:17.275349459 +0000 UTC m=+1256.554735911" observedRunningTime="2026-02-17 13:57:18.556973818 +0000 UTC m=+1257.836360260" watchObservedRunningTime="2026-02-17 13:57:18.564185683 +0000 UTC m=+1257.843572125" Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.626491 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:18 crc kubenswrapper[4768]: I0217 13:57:18.899945 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:57:18 crc 
kubenswrapper[4768]: I0217 13:57:18.900368 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.059394 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.147783 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.231913 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k7ml\" (UniqueName: \"kubernetes.io/projected/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-kube-api-access-4k7ml\") pod \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.232244 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-combined-ca-bundle\") pod \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.232344 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-config-data\") pod \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.232361 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-logs\") pod \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\" (UID: \"6ebd0ede-8ec0-405a-9a8c-75ad432bd515\") " Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.233246 4768 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-logs" (OuterVolumeSpecName: "logs") pod "6ebd0ede-8ec0-405a-9a8c-75ad432bd515" (UID: "6ebd0ede-8ec0-405a-9a8c-75ad432bd515"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.249703 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-kube-api-access-4k7ml" (OuterVolumeSpecName: "kube-api-access-4k7ml") pod "6ebd0ede-8ec0-405a-9a8c-75ad432bd515" (UID: "6ebd0ede-8ec0-405a-9a8c-75ad432bd515"). InnerVolumeSpecName "kube-api-access-4k7ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.299209 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-config-data" (OuterVolumeSpecName: "config-data") pod "6ebd0ede-8ec0-405a-9a8c-75ad432bd515" (UID: "6ebd0ede-8ec0-405a-9a8c-75ad432bd515"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.323684 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ebd0ede-8ec0-405a-9a8c-75ad432bd515" (UID: "6ebd0ede-8ec0-405a-9a8c-75ad432bd515"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.334909 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.334946 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.334956 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.334966 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k7ml\" (UniqueName: \"kubernetes.io/projected/6ebd0ede-8ec0-405a-9a8c-75ad432bd515-kube-api-access-4k7ml\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.502540 4768 generic.go:334] "Generic (PLEG): container finished" podID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerID="40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4" exitCode=0 Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.502585 4768 generic.go:334] "Generic (PLEG): container finished" podID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerID="88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e" exitCode=143 Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.502606 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.502668 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ebd0ede-8ec0-405a-9a8c-75ad432bd515","Type":"ContainerDied","Data":"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4"} Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.502695 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ebd0ede-8ec0-405a-9a8c-75ad432bd515","Type":"ContainerDied","Data":"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e"} Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.502706 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ebd0ede-8ec0-405a-9a8c-75ad432bd515","Type":"ContainerDied","Data":"23ecc59efbf352f757b52a4a32a6915ae99082f9884650746d6359b487b5e632"} Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.502721 4768 scope.go:117] "RemoveContainer" containerID="40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.545896 4768 scope.go:117] "RemoveContainer" containerID="88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.555684 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.575471 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.587565 4768 scope.go:117] "RemoveContainer" containerID="40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.600353 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:19 crc kubenswrapper[4768]: E0217 
13:57:19.600774 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-log" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.600792 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-log" Feb 17 13:57:19 crc kubenswrapper[4768]: E0217 13:57:19.600804 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-metadata" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.600810 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-metadata" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.601594 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-log" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.601678 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" containerName="nova-metadata-metadata" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.602924 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: E0217 13:57:19.603208 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4\": container with ID starting with 40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4 not found: ID does not exist" containerID="40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.603252 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4"} err="failed to get container status \"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4\": rpc error: code = NotFound desc = could not find container \"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4\": container with ID starting with 40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4 not found: ID does not exist" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.603280 4768 scope.go:117] "RemoveContainer" containerID="88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e" Feb 17 13:57:19 crc kubenswrapper[4768]: E0217 13:57:19.604032 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e\": container with ID starting with 88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e not found: ID does not exist" containerID="88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.604053 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e"} 
err="failed to get container status \"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e\": rpc error: code = NotFound desc = could not find container \"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e\": container with ID starting with 88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e not found: ID does not exist" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.604067 4768 scope.go:117] "RemoveContainer" containerID="40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.605247 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4"} err="failed to get container status \"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4\": rpc error: code = NotFound desc = could not find container \"40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4\": container with ID starting with 40aa4ce35e70abcdc19c5896a45cd80400d1018984b38e8777ec76e9d45345c4 not found: ID does not exist" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.605268 4768 scope.go:117] "RemoveContainer" containerID="88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.605650 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.605888 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.605994 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e"} err="failed to get container status \"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e\": rpc 
error: code = NotFound desc = could not find container \"88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e\": container with ID starting with 88dc0fd2d39bddd3286c2a9bde30ddf038b2244129d48c60d03c46f041298d7e not found: ID does not exist" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.611182 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.645277 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.645793 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.645852 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-config-data\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.647187 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8jzs\" (UniqueName: \"kubernetes.io/projected/2a875be7-6c07-4783-8f27-e13f4b89997e-kube-api-access-z8jzs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: 
I0217 13:57:19.647284 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a875be7-6c07-4783-8f27-e13f4b89997e-logs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.748967 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8jzs\" (UniqueName: \"kubernetes.io/projected/2a875be7-6c07-4783-8f27-e13f4b89997e-kube-api-access-z8jzs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.749059 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a875be7-6c07-4783-8f27-e13f4b89997e-logs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.749212 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.749370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.749474 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-config-data\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.749807 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a875be7-6c07-4783-8f27-e13f4b89997e-logs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.755269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.756327 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.758381 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-config-data\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc kubenswrapper[4768]: I0217 13:57:19.766867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8jzs\" (UniqueName: \"kubernetes.io/projected/2a875be7-6c07-4783-8f27-e13f4b89997e-kube-api-access-z8jzs\") pod \"nova-metadata-0\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " pod="openstack/nova-metadata-0" Feb 17 13:57:19 crc 
kubenswrapper[4768]: I0217 13:57:19.935681 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:20 crc kubenswrapper[4768]: I0217 13:57:20.432538 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:20 crc kubenswrapper[4768]: I0217 13:57:20.515000 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a875be7-6c07-4783-8f27-e13f4b89997e","Type":"ContainerStarted","Data":"6c6a8fdb40efb3e04fc3e8f96ffdfc1f1f629c6a0dadc4f2b1fc4cd9c5c565b0"} Feb 17 13:57:20 crc kubenswrapper[4768]: E0217 13:57:20.885695 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice/crio-1d17c1ad4b2966fe20b3c50b7d3e8591d321560f133eb3c2ff7c07184fbb95e5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20ad4a2_cf3e_4390_9141_1cc58518fd2b.slice\": RecentStats: unable to find data in memory cache]" Feb 17 13:57:21 crc kubenswrapper[4768]: I0217 13:57:21.485652 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 13:57:21 crc kubenswrapper[4768]: I0217 13:57:21.598439 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebd0ede-8ec0-405a-9a8c-75ad432bd515" path="/var/lib/kubelet/pods/6ebd0ede-8ec0-405a-9a8c-75ad432bd515/volumes" Feb 17 13:57:21 crc kubenswrapper[4768]: I0217 13:57:21.599221 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a875be7-6c07-4783-8f27-e13f4b89997e","Type":"ContainerStarted","Data":"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956"} Feb 17 13:57:21 crc kubenswrapper[4768]: I0217 13:57:21.599257 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a875be7-6c07-4783-8f27-e13f4b89997e","Type":"ContainerStarted","Data":"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d"} Feb 17 13:57:21 crc kubenswrapper[4768]: E0217 13:57:21.601293 4768 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8064d1bf46d97cd2bb7ebac17e9eee41f5fe77bcfc671d322b687f43a1642022/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8064d1bf46d97cd2bb7ebac17e9eee41f5fe77bcfc671d322b687f43a1642022/diff: no such file or directory, extraDiskErr: Feb 17 13:57:21 crc kubenswrapper[4768]: I0217 13:57:21.638286 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.638265301 podStartE2EDuration="2.638265301s" podCreationTimestamp="2026-02-17 13:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:21.63713231 +0000 UTC m=+1260.916518752" watchObservedRunningTime="2026-02-17 13:57:21.638265301 +0000 UTC m=+1260.917651743" Feb 17 13:57:22 crc kubenswrapper[4768]: I0217 13:57:22.578550 4768 generic.go:334] "Generic (PLEG): container finished" podID="ab5558d8-d8ab-4e56-a053-bd878be0dfb7" containerID="b2a1f2873dd7eae98bad852dc5d0a50cfdc71bae83dc7248c89fa165e7458f33" exitCode=0 Feb 17 13:57:22 crc kubenswrapper[4768]: I0217 13:57:22.578668 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6688" event={"ID":"ab5558d8-d8ab-4e56-a053-bd878be0dfb7","Type":"ContainerDied","Data":"b2a1f2873dd7eae98bad852dc5d0a50cfdc71bae83dc7248c89fa165e7458f33"} Feb 17 13:57:23 crc kubenswrapper[4768]: I0217 13:57:23.981257 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 13:57:23 crc kubenswrapper[4768]: I0217 13:57:23.981622 4768 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.041494 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.059700 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.091020 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.114255 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.143070 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c67h8\" (UniqueName: \"kubernetes.io/projected/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-kube-api-access-c67h8\") pod \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.143200 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-combined-ca-bundle\") pod \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.143247 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-scripts\") pod \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.143307 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-config-data\") pod \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\" (UID: \"ab5558d8-d8ab-4e56-a053-bd878be0dfb7\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.150239 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-scripts" (OuterVolumeSpecName: "scripts") pod "ab5558d8-d8ab-4e56-a053-bd878be0dfb7" (UID: "ab5558d8-d8ab-4e56-a053-bd878be0dfb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.153295 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-kube-api-access-c67h8" (OuterVolumeSpecName: "kube-api-access-c67h8") pod "ab5558d8-d8ab-4e56-a053-bd878be0dfb7" (UID: "ab5558d8-d8ab-4e56-a053-bd878be0dfb7"). InnerVolumeSpecName "kube-api-access-c67h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.201344 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kw2l4"] Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.201557 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" podUID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerName="dnsmasq-dns" containerID="cri-o://1516be8385d9e8af36ef4572a00ff3fc2ea3aff8702de8ab771ba1170ab97327" gracePeriod=10 Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.207792 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab5558d8-d8ab-4e56-a053-bd878be0dfb7" (UID: "ab5558d8-d8ab-4e56-a053-bd878be0dfb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.247612 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.247656 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c67h8\" (UniqueName: \"kubernetes.io/projected/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-kube-api-access-c67h8\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.247669 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.250420 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-config-data" (OuterVolumeSpecName: "config-data") pod "ab5558d8-d8ab-4e56-a053-bd878be0dfb7" (UID: "ab5558d8-d8ab-4e56-a053-bd878be0dfb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.349976 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5558d8-d8ab-4e56-a053-bd878be0dfb7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.600734 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6688" event={"ID":"ab5558d8-d8ab-4e56-a053-bd878be0dfb7","Type":"ContainerDied","Data":"c8e6c4403378a3f174f198a2ba800820d167cce3c58e846f5939ba7d56833698"} Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.600780 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e6c4403378a3f174f198a2ba800820d167cce3c58e846f5939ba7d56833698" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.600854 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6688" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.602661 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerID="1516be8385d9e8af36ef4572a00ff3fc2ea3aff8702de8ab771ba1170ab97327" exitCode=0 Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.602719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" event={"ID":"e9696419-5c03-4d1d-bd0c-7bf7becd6239","Type":"ContainerDied","Data":"1516be8385d9e8af36ef4572a00ff3fc2ea3aff8702de8ab771ba1170ab97327"} Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.603746 4768 generic.go:334] "Generic (PLEG): container finished" podID="7636550e-37fd-4031-a05d-603fef57553a" containerID="2386d79fdae6ad85a03996c9a38134a990c02eb25b593969a3ee16241de00a38" exitCode=0 Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.604183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" event={"ID":"7636550e-37fd-4031-a05d-603fef57553a","Type":"ContainerDied","Data":"2386d79fdae6ad85a03996c9a38134a990c02eb25b593969a3ee16241de00a38"} Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.642984 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.703561 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.877230 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-nb\") pod \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.877542 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5zz2\" (UniqueName: \"kubernetes.io/projected/e9696419-5c03-4d1d-bd0c-7bf7becd6239-kube-api-access-s5zz2\") pod \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.877616 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-svc\") pod \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.877763 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-config\") pod \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.877824 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-sb\") pod \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.877878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-swift-storage-0\") pod \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\" (UID: \"e9696419-5c03-4d1d-bd0c-7bf7becd6239\") " Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.883400 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9696419-5c03-4d1d-bd0c-7bf7becd6239-kube-api-access-s5zz2" (OuterVolumeSpecName: "kube-api-access-s5zz2") pod "e9696419-5c03-4d1d-bd0c-7bf7becd6239" (UID: "e9696419-5c03-4d1d-bd0c-7bf7becd6239"). InnerVolumeSpecName "kube-api-access-s5zz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.913655 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.914603 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-log" containerID="cri-o://7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed" gracePeriod=30 Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.915032 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-api" containerID="cri-o://81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff" gracePeriod=30 Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.930088 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.930334 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-log" containerID="cri-o://aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d" gracePeriod=30 Feb 17 
13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.930728 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-metadata" containerID="cri-o://1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956" gracePeriod=30 Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.936582 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.936583 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.936674 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.936955 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.964779 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-config" (OuterVolumeSpecName: "config") pod "e9696419-5c03-4d1d-bd0c-7bf7becd6239" (UID: "e9696419-5c03-4d1d-bd0c-7bf7becd6239"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.970532 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9696419-5c03-4d1d-bd0c-7bf7becd6239" (UID: "e9696419-5c03-4d1d-bd0c-7bf7becd6239"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.974629 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9696419-5c03-4d1d-bd0c-7bf7becd6239" (UID: "e9696419-5c03-4d1d-bd0c-7bf7becd6239"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.979445 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.979487 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5zz2\" (UniqueName: \"kubernetes.io/projected/e9696419-5c03-4d1d-bd0c-7bf7becd6239-kube-api-access-s5zz2\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.979505 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:24 crc kubenswrapper[4768]: I0217 13:57:24.979515 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.001616 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e9696419-5c03-4d1d-bd0c-7bf7becd6239" (UID: "e9696419-5c03-4d1d-bd0c-7bf7becd6239"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.003628 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9696419-5c03-4d1d-bd0c-7bf7becd6239" (UID: "e9696419-5c03-4d1d-bd0c-7bf7becd6239"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.081206 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.081242 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9696419-5c03-4d1d-bd0c-7bf7becd6239-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.179420 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.523488 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.616330 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" event={"ID":"e9696419-5c03-4d1d-bd0c-7bf7becd6239","Type":"ContainerDied","Data":"b1a9808b9043a7cb44999e6eda0c42fa77e5a37a0b2810ca075660a49c6c30ba"} Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.616375 4768 scope.go:117] "RemoveContainer" containerID="1516be8385d9e8af36ef4572a00ff3fc2ea3aff8702de8ab771ba1170ab97327" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.616477 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-kw2l4" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.628912 4768 generic.go:334] "Generic (PLEG): container finished" podID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerID="7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed" exitCode=143 Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.629049 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71481e86-0ed0-47c2-a15a-3379dacb425c","Type":"ContainerDied","Data":"7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed"} Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.639137 4768 generic.go:334] "Generic (PLEG): container finished" podID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerID="1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956" exitCode=0 Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.639169 4768 generic.go:334] "Generic (PLEG): container finished" podID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerID="aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d" exitCode=143 Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.639381 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.639735 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a875be7-6c07-4783-8f27-e13f4b89997e","Type":"ContainerDied","Data":"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956"} Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.639763 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a875be7-6c07-4783-8f27-e13f4b89997e","Type":"ContainerDied","Data":"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d"} Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.639775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a875be7-6c07-4783-8f27-e13f4b89997e","Type":"ContainerDied","Data":"6c6a8fdb40efb3e04fc3e8f96ffdfc1f1f629c6a0dadc4f2b1fc4cd9c5c565b0"} Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.663216 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kw2l4"] Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.669485 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-kw2l4"] Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.679322 4768 scope.go:117] "RemoveContainer" containerID="b8127ed30e1421cfce5fe50c51f7a182f40811a6bfb6cc3814a69f88ed5b8d2f" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.691421 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-config-data\") pod \"2a875be7-6c07-4783-8f27-e13f4b89997e\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.691581 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2a875be7-6c07-4783-8f27-e13f4b89997e-logs\") pod \"2a875be7-6c07-4783-8f27-e13f4b89997e\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.691637 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-combined-ca-bundle\") pod \"2a875be7-6c07-4783-8f27-e13f4b89997e\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.691694 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-nova-metadata-tls-certs\") pod \"2a875be7-6c07-4783-8f27-e13f4b89997e\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.691721 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8jzs\" (UniqueName: \"kubernetes.io/projected/2a875be7-6c07-4783-8f27-e13f4b89997e-kube-api-access-z8jzs\") pod \"2a875be7-6c07-4783-8f27-e13f4b89997e\" (UID: \"2a875be7-6c07-4783-8f27-e13f4b89997e\") " Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.701997 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a875be7-6c07-4783-8f27-e13f4b89997e-logs" (OuterVolumeSpecName: "logs") pod "2a875be7-6c07-4783-8f27-e13f4b89997e" (UID: "2a875be7-6c07-4783-8f27-e13f4b89997e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.702285 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a875be7-6c07-4783-8f27-e13f4b89997e-kube-api-access-z8jzs" (OuterVolumeSpecName: "kube-api-access-z8jzs") pod "2a875be7-6c07-4783-8f27-e13f4b89997e" (UID: "2a875be7-6c07-4783-8f27-e13f4b89997e"). InnerVolumeSpecName "kube-api-access-z8jzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.731838 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a875be7-6c07-4783-8f27-e13f4b89997e" (UID: "2a875be7-6c07-4783-8f27-e13f4b89997e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.740314 4768 scope.go:117] "RemoveContainer" containerID="1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.744249 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-config-data" (OuterVolumeSpecName: "config-data") pod "2a875be7-6c07-4783-8f27-e13f4b89997e" (UID: "2a875be7-6c07-4783-8f27-e13f4b89997e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.774388 4768 scope.go:117] "RemoveContainer" containerID="aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.787035 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2a875be7-6c07-4783-8f27-e13f4b89997e" (UID: "2a875be7-6c07-4783-8f27-e13f4b89997e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.797337 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8jzs\" (UniqueName: \"kubernetes.io/projected/2a875be7-6c07-4783-8f27-e13f4b89997e-kube-api-access-z8jzs\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.797357 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.797366 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a875be7-6c07-4783-8f27-e13f4b89997e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.797375 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.797384 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2a875be7-6c07-4783-8f27-e13f4b89997e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.808317 4768 scope.go:117] "RemoveContainer" containerID="1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956" Feb 17 13:57:25 crc kubenswrapper[4768]: E0217 13:57:25.810398 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956\": container with ID starting with 1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956 not found: ID does not exist" containerID="1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.810446 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956"} err="failed to get container status \"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956\": rpc error: code = NotFound desc = could not find container \"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956\": container with ID starting with 1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956 not found: ID does not exist" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.810489 4768 scope.go:117] "RemoveContainer" containerID="aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d" Feb 17 13:57:25 crc kubenswrapper[4768]: E0217 13:57:25.811029 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d\": container with ID starting with aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d not found: ID does not exist" 
containerID="aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.811069 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d"} err="failed to get container status \"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d\": rpc error: code = NotFound desc = could not find container \"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d\": container with ID starting with aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d not found: ID does not exist" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.811089 4768 scope.go:117] "RemoveContainer" containerID="1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.815695 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956"} err="failed to get container status \"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956\": rpc error: code = NotFound desc = could not find container \"1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956\": container with ID starting with 1eca4a42321ae73202fcef532025e3d62aa8baf5873ae58f8de5c74736954956 not found: ID does not exist" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.815736 4768 scope.go:117] "RemoveContainer" containerID="aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d" Feb 17 13:57:25 crc kubenswrapper[4768]: I0217 13:57:25.818333 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d"} err="failed to get container status \"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d\": rpc error: code = NotFound desc = could 
not find container \"aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d\": container with ID starting with aa3faba37ec1b6a6a4fdeca0d63e7863eabe7b9ea753ecdabe270670b6f5cc2d not found: ID does not exist" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.029126 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.071299 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.087729 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:26 crc kubenswrapper[4768]: E0217 13:57:26.088156 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-log" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088168 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-log" Feb 17 13:57:26 crc kubenswrapper[4768]: E0217 13:57:26.088194 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-metadata" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088200 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-metadata" Feb 17 13:57:26 crc kubenswrapper[4768]: E0217 13:57:26.088210 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5558d8-d8ab-4e56-a053-bd878be0dfb7" containerName="nova-manage" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088216 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5558d8-d8ab-4e56-a053-bd878be0dfb7" containerName="nova-manage" Feb 17 13:57:26 crc kubenswrapper[4768]: E0217 13:57:26.088232 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerName="init" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088238 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerName="init" Feb 17 13:57:26 crc kubenswrapper[4768]: E0217 13:57:26.088245 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerName="dnsmasq-dns" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088251 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerName="dnsmasq-dns" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088419 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5558d8-d8ab-4e56-a053-bd878be0dfb7" containerName="nova-manage" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088435 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" containerName="dnsmasq-dns" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088441 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-metadata" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.088460 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" containerName="nova-metadata-log" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.089418 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.096456 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.096460 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.111387 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.111463 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.111490 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/022e1dee-9e9a-4898-8667-cdce272dce30-logs\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.111531 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-config-data\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.111561 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5lm\" (UniqueName: \"kubernetes.io/projected/022e1dee-9e9a-4898-8667-cdce272dce30-kube-api-access-4k5lm\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.111819 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.201529 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.212565 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.212664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.212697 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/022e1dee-9e9a-4898-8667-cdce272dce30-logs\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.212750 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-config-data\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.212786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k5lm\" (UniqueName: \"kubernetes.io/projected/022e1dee-9e9a-4898-8667-cdce272dce30-kube-api-access-4k5lm\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.214619 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/022e1dee-9e9a-4898-8667-cdce272dce30-logs\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.219881 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.220763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-config-data\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.221232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc 
kubenswrapper[4768]: I0217 13:57:26.229996 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5lm\" (UniqueName: \"kubernetes.io/projected/022e1dee-9e9a-4898-8667-cdce272dce30-kube-api-access-4k5lm\") pod \"nova-metadata-0\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.316985 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-combined-ca-bundle\") pod \"7636550e-37fd-4031-a05d-603fef57553a\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.317128 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-config-data\") pod \"7636550e-37fd-4031-a05d-603fef57553a\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.317179 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-scripts\") pod \"7636550e-37fd-4031-a05d-603fef57553a\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.317233 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz9cg\" (UniqueName: \"kubernetes.io/projected/7636550e-37fd-4031-a05d-603fef57553a-kube-api-access-mz9cg\") pod \"7636550e-37fd-4031-a05d-603fef57553a\" (UID: \"7636550e-37fd-4031-a05d-603fef57553a\") " Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.321692 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7636550e-37fd-4031-a05d-603fef57553a-kube-api-access-mz9cg" 
(OuterVolumeSpecName: "kube-api-access-mz9cg") pod "7636550e-37fd-4031-a05d-603fef57553a" (UID: "7636550e-37fd-4031-a05d-603fef57553a"). InnerVolumeSpecName "kube-api-access-mz9cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.329880 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-scripts" (OuterVolumeSpecName: "scripts") pod "7636550e-37fd-4031-a05d-603fef57553a" (UID: "7636550e-37fd-4031-a05d-603fef57553a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.342902 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7636550e-37fd-4031-a05d-603fef57553a" (UID: "7636550e-37fd-4031-a05d-603fef57553a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.351702 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-config-data" (OuterVolumeSpecName: "config-data") pod "7636550e-37fd-4031-a05d-603fef57553a" (UID: "7636550e-37fd-4031-a05d-603fef57553a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.419355 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.419398 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.419412 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7636550e-37fd-4031-a05d-603fef57553a-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.419425 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz9cg\" (UniqueName: \"kubernetes.io/projected/7636550e-37fd-4031-a05d-603fef57553a-kube-api-access-mz9cg\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.501013 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.702926 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fa54770b-b069-4fe4-b618-30ab5004db67" containerName="nova-scheduler-scheduler" containerID="cri-o://39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda" gracePeriod=30 Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.703346 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.711289 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z5wdm" event={"ID":"7636550e-37fd-4031-a05d-603fef57553a","Type":"ContainerDied","Data":"edc8ef7389a5085acd0ddf37e64f224eed00b5c34fd3ea7c95b5fca3e8f1c795"} Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.711338 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc8ef7389a5085acd0ddf37e64f224eed00b5c34fd3ea7c95b5fca3e8f1c795" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.753337 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 13:57:26 crc kubenswrapper[4768]: E0217 13:57:26.753860 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7636550e-37fd-4031-a05d-603fef57553a" containerName="nova-cell1-conductor-db-sync" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.753891 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7636550e-37fd-4031-a05d-603fef57553a" containerName="nova-cell1-conductor-db-sync" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.754060 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7636550e-37fd-4031-a05d-603fef57553a" containerName="nova-cell1-conductor-db-sync" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.754640 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.759726 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.796534 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.928331 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqph\" (UniqueName: \"kubernetes.io/projected/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-kube-api-access-ngqph\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.929332 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:26 crc kubenswrapper[4768]: I0217 13:57:26.929535 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.014876 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.031369 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngqph\" (UniqueName: 
\"kubernetes.io/projected/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-kube-api-access-ngqph\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.031421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.031499 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.037908 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.039715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.047183 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngqph\" (UniqueName: \"kubernetes.io/projected/ee6256eb-4e26-4e93-ae49-8c6be5aace6c-kube-api-access-ngqph\") pod \"nova-cell1-conductor-0\" (UID: 
\"ee6256eb-4e26-4e93-ae49-8c6be5aace6c\") " pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.097064 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.548239 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a875be7-6c07-4783-8f27-e13f4b89997e" path="/var/lib/kubelet/pods/2a875be7-6c07-4783-8f27-e13f4b89997e/volumes" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.549390 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9696419-5c03-4d1d-bd0c-7bf7becd6239" path="/var/lib/kubelet/pods/e9696419-5c03-4d1d-bd0c-7bf7becd6239/volumes" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.553188 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 13:57:27 crc kubenswrapper[4768]: W0217 13:57:27.554075 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee6256eb_4e26_4e93_ae49_8c6be5aace6c.slice/crio-7e557bbe80df4ebc8a69baf3c9e685ec9059316c6152bcbc00dd68f26650d7a2 WatchSource:0}: Error finding container 7e557bbe80df4ebc8a69baf3c9e685ec9059316c6152bcbc00dd68f26650d7a2: Status 404 returned error can't find the container with id 7e557bbe80df4ebc8a69baf3c9e685ec9059316c6152bcbc00dd68f26650d7a2 Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.711381 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ee6256eb-4e26-4e93-ae49-8c6be5aace6c","Type":"ContainerStarted","Data":"2236f8c06f420594528de74a99144431e35c465d6d5f88e44e9df8f5c50e8cdb"} Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.711437 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"ee6256eb-4e26-4e93-ae49-8c6be5aace6c","Type":"ContainerStarted","Data":"7e557bbe80df4ebc8a69baf3c9e685ec9059316c6152bcbc00dd68f26650d7a2"} Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.712623 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.714298 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"022e1dee-9e9a-4898-8667-cdce272dce30","Type":"ContainerStarted","Data":"27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5"} Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.714342 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"022e1dee-9e9a-4898-8667-cdce272dce30","Type":"ContainerStarted","Data":"76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5"} Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.714352 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"022e1dee-9e9a-4898-8667-cdce272dce30","Type":"ContainerStarted","Data":"7e3df3d713a510aa95412e9ba3f658d2e46b6105ee36f177f8cf97b5891662d6"} Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.734774 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.734756228 podStartE2EDuration="1.734756228s" podCreationTimestamp="2026-02-17 13:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:27.729809814 +0000 UTC m=+1267.009196256" watchObservedRunningTime="2026-02-17 13:57:27.734756228 +0000 UTC m=+1267.014142660" Feb 17 13:57:27 crc kubenswrapper[4768]: I0217 13:57:27.750205 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.750185155 
podStartE2EDuration="1.750185155s" podCreationTimestamp="2026-02-17 13:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:27.748750627 +0000 UTC m=+1267.028137069" watchObservedRunningTime="2026-02-17 13:57:27.750185155 +0000 UTC m=+1267.029571607" Feb 17 13:57:28 crc kubenswrapper[4768]: I0217 13:57:28.059629 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:57:28 crc kubenswrapper[4768]: I0217 13:57:28.059691 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:57:29 crc kubenswrapper[4768]: E0217 13:57:29.060780 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 13:57:29 crc kubenswrapper[4768]: E0217 13:57:29.062746 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 13:57:29 crc kubenswrapper[4768]: E0217 13:57:29.064289 4768 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 13:57:29 crc kubenswrapper[4768]: E0217 13:57:29.064333 4768 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fa54770b-b069-4fe4-b618-30ab5004db67" containerName="nova-scheduler-scheduler" Feb 17 13:57:29 crc kubenswrapper[4768]: I0217 13:57:29.765982 4768 generic.go:334] "Generic (PLEG): container finished" podID="fa54770b-b069-4fe4-b618-30ab5004db67" containerID="39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda" exitCode=0 Feb 17 13:57:29 crc kubenswrapper[4768]: I0217 13:57:29.766068 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa54770b-b069-4fe4-b618-30ab5004db67","Type":"ContainerDied","Data":"39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda"} Feb 17 13:57:29 crc kubenswrapper[4768]: I0217 13:57:29.937731 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:57:29 crc kubenswrapper[4768]: I0217 13:57:29.986094 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnv7m\" (UniqueName: \"kubernetes.io/projected/fa54770b-b069-4fe4-b618-30ab5004db67-kube-api-access-pnv7m\") pod \"fa54770b-b069-4fe4-b618-30ab5004db67\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " Feb 17 13:57:29 crc kubenswrapper[4768]: I0217 13:57:29.986565 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-config-data\") pod \"fa54770b-b069-4fe4-b618-30ab5004db67\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " Feb 17 13:57:29 crc kubenswrapper[4768]: I0217 13:57:29.986730 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-combined-ca-bundle\") pod \"fa54770b-b069-4fe4-b618-30ab5004db67\" (UID: \"fa54770b-b069-4fe4-b618-30ab5004db67\") " Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.001375 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa54770b-b069-4fe4-b618-30ab5004db67-kube-api-access-pnv7m" (OuterVolumeSpecName: "kube-api-access-pnv7m") pod "fa54770b-b069-4fe4-b618-30ab5004db67" (UID: "fa54770b-b069-4fe4-b618-30ab5004db67"). InnerVolumeSpecName "kube-api-access-pnv7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.014050 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa54770b-b069-4fe4-b618-30ab5004db67" (UID: "fa54770b-b069-4fe4-b618-30ab5004db67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.016447 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-config-data" (OuterVolumeSpecName: "config-data") pod "fa54770b-b069-4fe4-b618-30ab5004db67" (UID: "fa54770b-b069-4fe4-b618-30ab5004db67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.088962 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.089000 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnv7m\" (UniqueName: \"kubernetes.io/projected/fa54770b-b069-4fe4-b618-30ab5004db67-kube-api-access-pnv7m\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.089014 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54770b-b069-4fe4-b618-30ab5004db67-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.781521 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa54770b-b069-4fe4-b618-30ab5004db67","Type":"ContainerDied","Data":"c0777139f89ace0754282f33110f172efbf95013c97c46c7fdeac21fe937cf69"} Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.781991 4768 scope.go:117] "RemoveContainer" containerID="39ae10eee605ad226ded5f02dd7ac7cd3908e988b7081ff0616b4a6f677dbfda" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.781688 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.823028 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.833008 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.840593 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:30 crc kubenswrapper[4768]: E0217 13:57:30.841188 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa54770b-b069-4fe4-b618-30ab5004db67" containerName="nova-scheduler-scheduler" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.841219 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa54770b-b069-4fe4-b618-30ab5004db67" containerName="nova-scheduler-scheduler" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.841425 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa54770b-b069-4fe4-b618-30ab5004db67" containerName="nova-scheduler-scheduler" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.842083 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.844634 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.849741 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.904033 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.904076 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-config-data\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:30 crc kubenswrapper[4768]: I0217 13:57:30.904227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jjkq\" (UniqueName: \"kubernetes.io/projected/c98f298b-2ec8-4d90-9112-8b5ad9109a92-kube-api-access-2jjkq\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.005402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jjkq\" (UniqueName: \"kubernetes.io/projected/c98f298b-2ec8-4d90-9112-8b5ad9109a92-kube-api-access-2jjkq\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.005477 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.005505 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-config-data\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.010396 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.011001 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-config-data\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.025916 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jjkq\" (UniqueName: \"kubernetes.io/projected/c98f298b-2ec8-4d90-9112-8b5ad9109a92-kube-api-access-2jjkq\") pod \"nova-scheduler-0\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.160764 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.506746 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.507068 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.550140 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa54770b-b069-4fe4-b618-30ab5004db67" path="/var/lib/kubelet/pods/fa54770b-b069-4fe4-b618-30ab5004db67/volumes" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.623345 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:57:31 crc kubenswrapper[4768]: W0217 13:57:31.628279 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc98f298b_2ec8_4d90_9112_8b5ad9109a92.slice/crio-281c3ad551b1ead1989f0d1dde0cc07ad7d3e09f0b25558a16c5aca585763c14 WatchSource:0}: Error finding container 281c3ad551b1ead1989f0d1dde0cc07ad7d3e09f0b25558a16c5aca585763c14: Status 404 returned error can't find the container with id 281c3ad551b1ead1989f0d1dde0cc07ad7d3e09f0b25558a16c5aca585763c14 Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.642535 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.791472 4768 generic.go:334] "Generic (PLEG): container finished" podID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerID="81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff" exitCode=0 Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.791541 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71481e86-0ed0-47c2-a15a-3379dacb425c","Type":"ContainerDied","Data":"81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff"} Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.791570 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71481e86-0ed0-47c2-a15a-3379dacb425c","Type":"ContainerDied","Data":"482617c7e2a10641ea2c3d22bb6f885cf733d40855e3e0c388fb0d3a95f2933a"} Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.791567 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.791632 4768 scope.go:117] "RemoveContainer" containerID="81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.793734 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c98f298b-2ec8-4d90-9112-8b5ad9109a92","Type":"ContainerStarted","Data":"281c3ad551b1ead1989f0d1dde0cc07ad7d3e09f0b25558a16c5aca585763c14"} Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.817908 4768 scope.go:117] "RemoveContainer" containerID="7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.819671 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-combined-ca-bundle\") pod \"71481e86-0ed0-47c2-a15a-3379dacb425c\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.819872 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-config-data\") pod \"71481e86-0ed0-47c2-a15a-3379dacb425c\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.820001 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71481e86-0ed0-47c2-a15a-3379dacb425c-logs\") pod \"71481e86-0ed0-47c2-a15a-3379dacb425c\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.820327 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlj48\" (UniqueName: 
\"kubernetes.io/projected/71481e86-0ed0-47c2-a15a-3379dacb425c-kube-api-access-dlj48\") pod \"71481e86-0ed0-47c2-a15a-3379dacb425c\" (UID: \"71481e86-0ed0-47c2-a15a-3379dacb425c\") " Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.820525 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71481e86-0ed0-47c2-a15a-3379dacb425c-logs" (OuterVolumeSpecName: "logs") pod "71481e86-0ed0-47c2-a15a-3379dacb425c" (UID: "71481e86-0ed0-47c2-a15a-3379dacb425c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.821137 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71481e86-0ed0-47c2-a15a-3379dacb425c-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.824231 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71481e86-0ed0-47c2-a15a-3379dacb425c-kube-api-access-dlj48" (OuterVolumeSpecName: "kube-api-access-dlj48") pod "71481e86-0ed0-47c2-a15a-3379dacb425c" (UID: "71481e86-0ed0-47c2-a15a-3379dacb425c"). InnerVolumeSpecName "kube-api-access-dlj48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.839681 4768 scope.go:117] "RemoveContainer" containerID="81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff" Feb 17 13:57:31 crc kubenswrapper[4768]: E0217 13:57:31.840298 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff\": container with ID starting with 81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff not found: ID does not exist" containerID="81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.840331 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff"} err="failed to get container status \"81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff\": rpc error: code = NotFound desc = could not find container \"81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff\": container with ID starting with 81e45c86629ef1f04ba43e5477ade7e085e9347a89cf29b7daafa5c4496432ff not found: ID does not exist" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.840363 4768 scope.go:117] "RemoveContainer" containerID="7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed" Feb 17 13:57:31 crc kubenswrapper[4768]: E0217 13:57:31.840605 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed\": container with ID starting with 7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed not found: ID does not exist" containerID="7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.840623 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed"} err="failed to get container status \"7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed\": rpc error: code = NotFound desc = could not find container \"7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed\": container with ID starting with 7ce8a2df0172071d921fc117a138e0c62e92da3614e355458d5478a4f3ac78ed not found: ID does not exist" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.847380 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71481e86-0ed0-47c2-a15a-3379dacb425c" (UID: "71481e86-0ed0-47c2-a15a-3379dacb425c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.854742 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-config-data" (OuterVolumeSpecName: "config-data") pod "71481e86-0ed0-47c2-a15a-3379dacb425c" (UID: "71481e86-0ed0-47c2-a15a-3379dacb425c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.922398 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.922432 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlj48\" (UniqueName: \"kubernetes.io/projected/71481e86-0ed0-47c2-a15a-3379dacb425c-kube-api-access-dlj48\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:31 crc kubenswrapper[4768]: I0217 13:57:31.922443 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71481e86-0ed0-47c2-a15a-3379dacb425c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.149355 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.158797 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.173206 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.182237 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:32 crc kubenswrapper[4768]: E0217 13:57:32.182675 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-api" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.182693 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-api" Feb 17 13:57:32 crc kubenswrapper[4768]: E0217 13:57:32.182703 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-log" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.182709 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-log" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.182966 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-log" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.182997 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" containerName="nova-api-api" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.184033 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.190263 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.212843 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.339508 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.339654 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-config-data\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.339739 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2be3d62-c831-45db-b7a4-34557edcf1af-logs\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.339892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf95q\" (UniqueName: \"kubernetes.io/projected/f2be3d62-c831-45db-b7a4-34557edcf1af-kube-api-access-zf95q\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.441466 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf95q\" (UniqueName: \"kubernetes.io/projected/f2be3d62-c831-45db-b7a4-34557edcf1af-kube-api-access-zf95q\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.441544 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.441568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-config-data\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.441605 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2be3d62-c831-45db-b7a4-34557edcf1af-logs\") pod \"nova-api-0\" (UID: 
\"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.441950 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2be3d62-c831-45db-b7a4-34557edcf1af-logs\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.445711 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-config-data\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.445936 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.458844 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf95q\" (UniqueName: \"kubernetes.io/projected/f2be3d62-c831-45db-b7a4-34557edcf1af-kube-api-access-zf95q\") pod \"nova-api-0\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.534746 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.806158 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c98f298b-2ec8-4d90-9112-8b5ad9109a92","Type":"ContainerStarted","Data":"edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7"} Feb 17 13:57:32 crc kubenswrapper[4768]: I0217 13:57:32.823562 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.823543572 podStartE2EDuration="2.823543572s" podCreationTimestamp="2026-02-17 13:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:32.822077162 +0000 UTC m=+1272.101463594" watchObservedRunningTime="2026-02-17 13:57:32.823543572 +0000 UTC m=+1272.102930014" Feb 17 13:57:33 crc kubenswrapper[4768]: I0217 13:57:33.024193 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:33 crc kubenswrapper[4768]: W0217 13:57:33.024388 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2be3d62_c831_45db_b7a4_34557edcf1af.slice/crio-5474b41de4745ea780c8f7ad28bf194024f1562cef57a1e39106751570898b8d WatchSource:0}: Error finding container 5474b41de4745ea780c8f7ad28bf194024f1562cef57a1e39106751570898b8d: Status 404 returned error can't find the container with id 5474b41de4745ea780c8f7ad28bf194024f1562cef57a1e39106751570898b8d Feb 17 13:57:33 crc kubenswrapper[4768]: I0217 13:57:33.551456 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71481e86-0ed0-47c2-a15a-3379dacb425c" path="/var/lib/kubelet/pods/71481e86-0ed0-47c2-a15a-3379dacb425c/volumes" Feb 17 13:57:33 crc kubenswrapper[4768]: I0217 13:57:33.818154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f2be3d62-c831-45db-b7a4-34557edcf1af","Type":"ContainerStarted","Data":"9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28"} Feb 17 13:57:33 crc kubenswrapper[4768]: I0217 13:57:33.818462 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2be3d62-c831-45db-b7a4-34557edcf1af","Type":"ContainerStarted","Data":"112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f"} Feb 17 13:57:33 crc kubenswrapper[4768]: I0217 13:57:33.818478 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2be3d62-c831-45db-b7a4-34557edcf1af","Type":"ContainerStarted","Data":"5474b41de4745ea780c8f7ad28bf194024f1562cef57a1e39106751570898b8d"} Feb 17 13:57:33 crc kubenswrapper[4768]: I0217 13:57:33.841186 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.8411656619999999 podStartE2EDuration="1.841165662s" podCreationTimestamp="2026-02-17 13:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:33.83591553 +0000 UTC m=+1273.115302012" watchObservedRunningTime="2026-02-17 13:57:33.841165662 +0000 UTC m=+1273.120552104" Feb 17 13:57:36 crc kubenswrapper[4768]: I0217 13:57:36.161360 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 13:57:36 crc kubenswrapper[4768]: I0217 13:57:36.502087 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 13:57:36 crc kubenswrapper[4768]: I0217 13:57:36.502186 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 13:57:37 crc kubenswrapper[4768]: I0217 13:57:37.514247 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 13:57:37 crc kubenswrapper[4768]: I0217 13:57:37.514238 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 13:57:41 crc kubenswrapper[4768]: I0217 13:57:41.161421 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 13:57:41 crc kubenswrapper[4768]: I0217 13:57:41.191427 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 13:57:41 crc kubenswrapper[4768]: I0217 13:57:41.934477 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 13:57:42 crc kubenswrapper[4768]: I0217 13:57:42.535079 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 13:57:42 crc kubenswrapper[4768]: I0217 13:57:42.535143 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 13:57:43 crc kubenswrapper[4768]: I0217 13:57:43.617283 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 13:57:43 crc kubenswrapper[4768]: I0217 13:57:43.617311 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 13:57:46 crc kubenswrapper[4768]: I0217 13:57:46.511292 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 13:57:46 crc kubenswrapper[4768]: I0217 13:57:46.512264 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 13:57:46 crc kubenswrapper[4768]: I0217 13:57:46.518985 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 13:57:46 crc kubenswrapper[4768]: I0217 13:57:46.519087 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.912494 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.968074 4768 generic.go:334] "Generic (PLEG): container finished" podID="7549ca8c-a029-408c-b740-6c376751dbf6" containerID="e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964" exitCode=137 Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.968151 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7549ca8c-a029-408c-b740-6c376751dbf6","Type":"ContainerDied","Data":"e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964"} Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.968207 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7549ca8c-a029-408c-b740-6c376751dbf6","Type":"ContainerDied","Data":"a845bbd91f149d5bc2faab9bc753bc26eaebf6a5ee4cd66c29572055650cb2ad"} Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.968226 4768 scope.go:117] 
"RemoveContainer" containerID="e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964" Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.968245 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.982822 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-combined-ca-bundle\") pod \"7549ca8c-a029-408c-b740-6c376751dbf6\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.983027 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gnzv\" (UniqueName: \"kubernetes.io/projected/7549ca8c-a029-408c-b740-6c376751dbf6-kube-api-access-5gnzv\") pod \"7549ca8c-a029-408c-b740-6c376751dbf6\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.984220 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-config-data\") pod \"7549ca8c-a029-408c-b740-6c376751dbf6\" (UID: \"7549ca8c-a029-408c-b740-6c376751dbf6\") " Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.994450 4768 scope.go:117] "RemoveContainer" containerID="e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964" Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.994499 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7549ca8c-a029-408c-b740-6c376751dbf6-kube-api-access-5gnzv" (OuterVolumeSpecName: "kube-api-access-5gnzv") pod "7549ca8c-a029-408c-b740-6c376751dbf6" (UID: "7549ca8c-a029-408c-b740-6c376751dbf6"). InnerVolumeSpecName "kube-api-access-5gnzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:48 crc kubenswrapper[4768]: E0217 13:57:48.994974 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964\": container with ID starting with e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964 not found: ID does not exist" containerID="e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964" Feb 17 13:57:48 crc kubenswrapper[4768]: I0217 13:57:48.995037 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964"} err="failed to get container status \"e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964\": rpc error: code = NotFound desc = could not find container \"e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964\": container with ID starting with e341641e7ad5f7011c170f38c42145bafbb17a6b90a496d8430c17bfb55e2964 not found: ID does not exist" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.011323 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7549ca8c-a029-408c-b740-6c376751dbf6" (UID: "7549ca8c-a029-408c-b740-6c376751dbf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.011611 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-config-data" (OuterVolumeSpecName: "config-data") pod "7549ca8c-a029-408c-b740-6c376751dbf6" (UID: "7549ca8c-a029-408c-b740-6c376751dbf6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.086659 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gnzv\" (UniqueName: \"kubernetes.io/projected/7549ca8c-a029-408c-b740-6c376751dbf6-kube-api-access-5gnzv\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.086696 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.086706 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7549ca8c-a029-408c-b740-6c376751dbf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.307778 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.317418 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.338787 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:49 crc kubenswrapper[4768]: E0217 13:57:49.339300 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7549ca8c-a029-408c-b740-6c376751dbf6" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.339316 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7549ca8c-a029-408c-b740-6c376751dbf6" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.339516 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7549ca8c-a029-408c-b740-6c376751dbf6" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 
13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.340188 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.342480 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.342719 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.342897 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.348373 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.493495 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.493987 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.494458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.494628 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjsp\" (UniqueName: \"kubernetes.io/projected/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-kube-api-access-jvjsp\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.494733 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.546353 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7549ca8c-a029-408c-b740-6c376751dbf6" path="/var/lib/kubelet/pods/7549ca8c-a029-408c-b740-6c376751dbf6/volumes" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.596921 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.597062 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjsp\" (UniqueName: \"kubernetes.io/projected/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-kube-api-access-jvjsp\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.597120 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.600274 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.600350 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.604359 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.604860 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.605569 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.606196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.619829 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjsp\" (UniqueName: \"kubernetes.io/projected/15ac025d-e62d-4a1d-8f2c-86d36c7261f2-kube-api-access-jvjsp\") pod \"nova-cell1-novncproxy-0\" (UID: \"15ac025d-e62d-4a1d-8f2c-86d36c7261f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:49 crc kubenswrapper[4768]: I0217 13:57:49.663066 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:50 crc kubenswrapper[4768]: W0217 13:57:50.133734 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15ac025d_e62d_4a1d_8f2c_86d36c7261f2.slice/crio-37b8ad943506ac8fa4c93ce34a68f1222ccb00f138f9ede127a46045b8e3ef21 WatchSource:0}: Error finding container 37b8ad943506ac8fa4c93ce34a68f1222ccb00f138f9ede127a46045b8e3ef21: Status 404 returned error can't find the container with id 37b8ad943506ac8fa4c93ce34a68f1222ccb00f138f9ede127a46045b8e3ef21 Feb 17 13:57:50 crc kubenswrapper[4768]: I0217 13:57:50.137468 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 13:57:50 crc kubenswrapper[4768]: I0217 13:57:50.988470 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15ac025d-e62d-4a1d-8f2c-86d36c7261f2","Type":"ContainerStarted","Data":"c24126cf434978a9b4cba97e3bebbde435309eca9904a1690f87b68bf065a873"} Feb 17 13:57:50 crc kubenswrapper[4768]: I0217 13:57:50.989669 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15ac025d-e62d-4a1d-8f2c-86d36c7261f2","Type":"ContainerStarted","Data":"37b8ad943506ac8fa4c93ce34a68f1222ccb00f138f9ede127a46045b8e3ef21"} Feb 17 13:57:51 crc kubenswrapper[4768]: I0217 13:57:51.015809 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.015786549 podStartE2EDuration="2.015786549s" podCreationTimestamp="2026-02-17 13:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:51.005631653 +0000 UTC m=+1290.285018105" watchObservedRunningTime="2026-02-17 13:57:51.015786549 +0000 UTC m=+1290.295172991" Feb 17 13:57:52 crc kubenswrapper[4768]: I0217 
13:57:52.538892 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 13:57:52 crc kubenswrapper[4768]: I0217 13:57:52.540397 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 13:57:52 crc kubenswrapper[4768]: I0217 13:57:52.540595 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 13:57:52 crc kubenswrapper[4768]: I0217 13:57:52.542250 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.008556 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.012042 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.181298 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmnn5"] Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.183260 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.204057 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmnn5"] Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.293660 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-config\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.294074 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn2jc\" (UniqueName: \"kubernetes.io/projected/07e49b07-273e-48ae-8c45-c523632d87fe-kube-api-access-sn2jc\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.294178 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.294354 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.294444 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.294469 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.397529 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.397579 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.397610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-config\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.397863 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn2jc\" (UniqueName: 
\"kubernetes.io/projected/07e49b07-273e-48ae-8c45-c523632d87fe-kube-api-access-sn2jc\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.397929 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.398228 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.399049 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.399121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.399334 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-svc\") pod 
\"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.399430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-config\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.399557 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.419570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn2jc\" (UniqueName: \"kubernetes.io/projected/07e49b07-273e-48ae-8c45-c523632d87fe-kube-api-access-sn2jc\") pod \"dnsmasq-dns-89c5cd4d5-mmnn5\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:53 crc kubenswrapper[4768]: I0217 13:57:53.520177 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:54 crc kubenswrapper[4768]: I0217 13:57:54.217484 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmnn5"] Feb 17 13:57:54 crc kubenswrapper[4768]: W0217 13:57:54.228552 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e49b07_273e_48ae_8c45_c523632d87fe.slice/crio-836ef9b8b4f0f35fd118f369b2222738da3a57c0b92f27ba0e237ff2db752847 WatchSource:0}: Error finding container 836ef9b8b4f0f35fd118f369b2222738da3a57c0b92f27ba0e237ff2db752847: Status 404 returned error can't find the container with id 836ef9b8b4f0f35fd118f369b2222738da3a57c0b92f27ba0e237ff2db752847 Feb 17 13:57:54 crc kubenswrapper[4768]: I0217 13:57:54.664001 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.027496 4768 generic.go:334] "Generic (PLEG): container finished" podID="07e49b07-273e-48ae-8c45-c523632d87fe" containerID="191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5" exitCode=0 Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.027714 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" event={"ID":"07e49b07-273e-48ae-8c45-c523632d87fe","Type":"ContainerDied","Data":"191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5"} Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.028568 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" event={"ID":"07e49b07-273e-48ae-8c45-c523632d87fe","Type":"ContainerStarted","Data":"836ef9b8b4f0f35fd118f369b2222738da3a57c0b92f27ba0e237ff2db752847"} Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.256869 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:57:55 crc 
kubenswrapper[4768]: I0217 13:57:55.257169 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-central-agent" containerID="cri-o://ac7ddf9eb06d6f18d7342a13b09c6dba4c27fc305d6af225083538a2f6409c04" gracePeriod=30 Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.257215 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="proxy-httpd" containerID="cri-o://2df5118ed0669bb5e795d62dcce34e68c36a4595b69a6d6101d81fbc8264afa0" gracePeriod=30 Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.257313 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-notification-agent" containerID="cri-o://615d8f7a44e0f946114808e2d38caf13a2c7181d71f6317fcce1c0f02c291289" gracePeriod=30 Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.257215 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="sg-core" containerID="cri-o://2a7e21d4192de968fb6d5d8f9994980f98e62ef160bd5436c786b903aaa24c6c" gracePeriod=30 Feb 17 13:57:55 crc kubenswrapper[4768]: I0217 13:57:55.402880 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.054553 4768 generic.go:334] "Generic (PLEG): container finished" podID="85ca1076-5485-492d-a920-d51cf7b376f8" containerID="2df5118ed0669bb5e795d62dcce34e68c36a4595b69a6d6101d81fbc8264afa0" exitCode=0 Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.054596 4768 generic.go:334] "Generic (PLEG): container finished" podID="85ca1076-5485-492d-a920-d51cf7b376f8" 
containerID="2a7e21d4192de968fb6d5d8f9994980f98e62ef160bd5436c786b903aaa24c6c" exitCode=2 Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.054606 4768 generic.go:334] "Generic (PLEG): container finished" podID="85ca1076-5485-492d-a920-d51cf7b376f8" containerID="ac7ddf9eb06d6f18d7342a13b09c6dba4c27fc305d6af225083538a2f6409c04" exitCode=0 Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.054662 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerDied","Data":"2df5118ed0669bb5e795d62dcce34e68c36a4595b69a6d6101d81fbc8264afa0"} Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.054690 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerDied","Data":"2a7e21d4192de968fb6d5d8f9994980f98e62ef160bd5436c786b903aaa24c6c"} Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.054704 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerDied","Data":"ac7ddf9eb06d6f18d7342a13b09c6dba4c27fc305d6af225083538a2f6409c04"} Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.057248 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-log" containerID="cri-o://112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f" gracePeriod=30 Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.058623 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" event={"ID":"07e49b07-273e-48ae-8c45-c523632d87fe","Type":"ContainerStarted","Data":"c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710"} Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.058679 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.059083 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-api" containerID="cri-o://9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28" gracePeriod=30 Feb 17 13:57:56 crc kubenswrapper[4768]: I0217 13:57:56.087259 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" podStartSLOduration=3.087238803 podStartE2EDuration="3.087238803s" podCreationTimestamp="2026-02-17 13:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:57:56.07824304 +0000 UTC m=+1295.357629482" watchObservedRunningTime="2026-02-17 13:57:56.087238803 +0000 UTC m=+1295.366625245" Feb 17 13:57:57 crc kubenswrapper[4768]: I0217 13:57:57.068193 4768 generic.go:334] "Generic (PLEG): container finished" podID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerID="112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f" exitCode=143 Feb 17 13:57:57 crc kubenswrapper[4768]: I0217 13:57:57.069008 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2be3d62-c831-45db-b7a4-34557edcf1af","Type":"ContainerDied","Data":"112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f"} Feb 17 13:57:58 crc kubenswrapper[4768]: I0217 13:57:58.060918 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:57:58 crc kubenswrapper[4768]: I0217 13:57:58.060975 4768 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:57:58 crc kubenswrapper[4768]: I0217 13:57:58.084833 4768 generic.go:334] "Generic (PLEG): container finished" podID="85ca1076-5485-492d-a920-d51cf7b376f8" containerID="615d8f7a44e0f946114808e2d38caf13a2c7181d71f6317fcce1c0f02c291289" exitCode=0 Feb 17 13:57:58 crc kubenswrapper[4768]: I0217 13:57:58.084875 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerDied","Data":"615d8f7a44e0f946114808e2d38caf13a2c7181d71f6317fcce1c0f02c291289"} Feb 17 13:57:58 crc kubenswrapper[4768]: I0217 13:57:58.869905 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021179 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-combined-ca-bundle\") pod \"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021230 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-sg-core-conf-yaml\") pod \"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021255 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-scripts\") pod 
\"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021344 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-log-httpd\") pod \"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021425 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-ceilometer-tls-certs\") pod \"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021457 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wv74\" (UniqueName: \"kubernetes.io/projected/85ca1076-5485-492d-a920-d51cf7b376f8-kube-api-access-9wv74\") pod \"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021732 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-run-httpd\") pod \"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.021803 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-config-data\") pod \"85ca1076-5485-492d-a920-d51cf7b376f8\" (UID: \"85ca1076-5485-492d-a920-d51cf7b376f8\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.022006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.022407 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.024799 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.024855 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85ca1076-5485-492d-a920-d51cf7b376f8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.028751 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ca1076-5485-492d-a920-d51cf7b376f8-kube-api-access-9wv74" (OuterVolumeSpecName: "kube-api-access-9wv74") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "kube-api-access-9wv74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.034203 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-scripts" (OuterVolumeSpecName: "scripts") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.057920 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.098213 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.101638 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85ca1076-5485-492d-a920-d51cf7b376f8","Type":"ContainerDied","Data":"c4f9405d0a29c454e7119dc5d24c76eb1b4ff5aa43f039040e548d296343a894"} Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.101695 4768 scope.go:117] "RemoveContainer" containerID="2df5118ed0669bb5e795d62dcce34e68c36a4595b69a6d6101d81fbc8264afa0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.101721 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.127949 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.127993 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wv74\" (UniqueName: \"kubernetes.io/projected/85ca1076-5485-492d-a920-d51cf7b376f8-kube-api-access-9wv74\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.128008 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.128022 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.135908 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.171603 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-config-data" (OuterVolumeSpecName: "config-data") pod "85ca1076-5485-492d-a920-d51cf7b376f8" (UID: "85ca1076-5485-492d-a920-d51cf7b376f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.229269 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.229305 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ca1076-5485-492d-a920-d51cf7b376f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.367878 4768 scope.go:117] "RemoveContainer" containerID="2a7e21d4192de968fb6d5d8f9994980f98e62ef160bd5436c786b903aaa24c6c" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.402453 4768 scope.go:117] "RemoveContainer" containerID="615d8f7a44e0f946114808e2d38caf13a2c7181d71f6317fcce1c0f02c291289" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.427430 4768 scope.go:117] "RemoveContainer" containerID="ac7ddf9eb06d6f18d7342a13b09c6dba4c27fc305d6af225083538a2f6409c04" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.474625 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.496767 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.506339 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:57:59 crc kubenswrapper[4768]: E0217 13:57:59.506917 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-notification-agent" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.506937 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-notification-agent" Feb 17 
13:57:59 crc kubenswrapper[4768]: E0217 13:57:59.506975 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="sg-core" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.506987 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="sg-core" Feb 17 13:57:59 crc kubenswrapper[4768]: E0217 13:57:59.507014 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="proxy-httpd" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.507023 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="proxy-httpd" Feb 17 13:57:59 crc kubenswrapper[4768]: E0217 13:57:59.507036 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-central-agent" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.507044 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-central-agent" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.507305 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-notification-agent" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.507323 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="ceilometer-central-agent" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.507359 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" containerName="proxy-httpd" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.507374 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ca1076-5485-492d-a920-d51cf7b376f8" 
containerName="sg-core" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.510373 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.513053 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.513381 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.514498 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.518280 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.535471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b042dd2b-3a49-4aec-a401-e0f3980f0e73-log-httpd\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.535536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jzh\" (UniqueName: \"kubernetes.io/projected/b042dd2b-3a49-4aec-a401-e0f3980f0e73-kube-api-access-s6jzh\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.535563 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-scripts\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: 
I0217 13:57:59.535596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.535619 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.535639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.535674 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-config-data\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.535694 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b042dd2b-3a49-4aec-a401-e0f3980f0e73-run-httpd\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.549950 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="85ca1076-5485-492d-a920-d51cf7b376f8" path="/var/lib/kubelet/pods/85ca1076-5485-492d-a920-d51cf7b376f8/volumes" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.627250 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637226 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2be3d62-c831-45db-b7a4-34557edcf1af-logs\") pod \"f2be3d62-c831-45db-b7a4-34557edcf1af\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637276 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf95q\" (UniqueName: \"kubernetes.io/projected/f2be3d62-c831-45db-b7a4-34557edcf1af-kube-api-access-zf95q\") pod \"f2be3d62-c831-45db-b7a4-34557edcf1af\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-config-data\") pod \"f2be3d62-c831-45db-b7a4-34557edcf1af\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637446 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-combined-ca-bundle\") pod \"f2be3d62-c831-45db-b7a4-34557edcf1af\" (UID: \"f2be3d62-c831-45db-b7a4-34557edcf1af\") " Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637662 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-config-data\") pod \"ceilometer-0\" (UID: 
\"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637689 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b042dd2b-3a49-4aec-a401-e0f3980f0e73-run-httpd\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b042dd2b-3a49-4aec-a401-e0f3980f0e73-log-httpd\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6jzh\" (UniqueName: \"kubernetes.io/projected/b042dd2b-3a49-4aec-a401-e0f3980f0e73-kube-api-access-s6jzh\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637865 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-scripts\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637912 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637939 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.637969 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.638500 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b042dd2b-3a49-4aec-a401-e0f3980f0e73-log-httpd\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.638557 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b042dd2b-3a49-4aec-a401-e0f3980f0e73-run-httpd\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.639030 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2be3d62-c831-45db-b7a4-34557edcf1af-logs" (OuterVolumeSpecName: "logs") pod "f2be3d62-c831-45db-b7a4-34557edcf1af" (UID: "f2be3d62-c831-45db-b7a4-34557edcf1af"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.641292 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2be3d62-c831-45db-b7a4-34557edcf1af-kube-api-access-zf95q" (OuterVolumeSpecName: "kube-api-access-zf95q") pod "f2be3d62-c831-45db-b7a4-34557edcf1af" (UID: "f2be3d62-c831-45db-b7a4-34557edcf1af"). InnerVolumeSpecName "kube-api-access-zf95q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.641639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.643753 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-scripts\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.644469 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.647137 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-config-data\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.647983 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b042dd2b-3a49-4aec-a401-e0f3980f0e73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.666499 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6jzh\" (UniqueName: \"kubernetes.io/projected/b042dd2b-3a49-4aec-a401-e0f3980f0e73-kube-api-access-s6jzh\") pod \"ceilometer-0\" (UID: \"b042dd2b-3a49-4aec-a401-e0f3980f0e73\") " pod="openstack/ceilometer-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.675450 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2be3d62-c831-45db-b7a4-34557edcf1af" (UID: "f2be3d62-c831-45db-b7a4-34557edcf1af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.686139 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.703807 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.713737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-config-data" (OuterVolumeSpecName: "config-data") pod "f2be3d62-c831-45db-b7a4-34557edcf1af" (UID: "f2be3d62-c831-45db-b7a4-34557edcf1af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.740672 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.740903 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2be3d62-c831-45db-b7a4-34557edcf1af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.740968 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2be3d62-c831-45db-b7a4-34557edcf1af-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.741033 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf95q\" (UniqueName: \"kubernetes.io/projected/f2be3d62-c831-45db-b7a4-34557edcf1af-kube-api-access-zf95q\") on node \"crc\" DevicePath \"\"" Feb 17 13:57:59 crc kubenswrapper[4768]: I0217 13:57:59.840378 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.122944 4768 generic.go:334] "Generic (PLEG): container finished" podID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerID="9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28" exitCode=0 Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.123021 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.123074 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2be3d62-c831-45db-b7a4-34557edcf1af","Type":"ContainerDied","Data":"9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28"} Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.123129 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2be3d62-c831-45db-b7a4-34557edcf1af","Type":"ContainerDied","Data":"5474b41de4745ea780c8f7ad28bf194024f1562cef57a1e39106751570898b8d"} Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.123149 4768 scope.go:117] "RemoveContainer" containerID="9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.147226 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.152020 4768 scope.go:117] "RemoveContainer" containerID="112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.171581 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.182829 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.193270 4768 scope.go:117] "RemoveContainer" containerID="9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28" Feb 17 13:58:00 crc kubenswrapper[4768]: E0217 13:58:00.195158 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28\": container with ID starting with 
9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28 not found: ID does not exist" containerID="9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.195202 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28"} err="failed to get container status \"9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28\": rpc error: code = NotFound desc = could not find container \"9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28\": container with ID starting with 9d81c03dbcf4a7e51e762768d8b1c34cbf8b60e5bb2bf99cd08091fcda221b28 not found: ID does not exist" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.195234 4768 scope.go:117] "RemoveContainer" containerID="112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.198173 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:00 crc kubenswrapper[4768]: E0217 13:58:00.198714 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-api" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.198737 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-api" Feb 17 13:58:00 crc kubenswrapper[4768]: E0217 13:58:00.198762 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-log" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.198772 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-log" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.198960 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-api" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.198981 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" containerName="nova-api-log" Feb 17 13:58:00 crc kubenswrapper[4768]: E0217 13:58:00.199232 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f\": container with ID starting with 112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f not found: ID does not exist" containerID="112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.199295 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f"} err="failed to get container status \"112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f\": rpc error: code = NotFound desc = could not find container \"112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f\": container with ID starting with 112e3c20d5ad0a22c2fff652ae7de7438ed4ea691f0543b2fe529bd11407b47f not found: ID does not exist" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.200290 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.205634 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.205904 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.211610 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.249739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.249795 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-config-data\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.249880 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-public-tls-certs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.250123 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.250406 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29a0fbe-a809-4df2-9023-99f8cf4ae420-logs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.250445 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkwh\" (UniqueName: \"kubernetes.io/projected/b29a0fbe-a809-4df2-9023-99f8cf4ae420-kube-api-access-rkkwh\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.255213 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.332914 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.354613 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29a0fbe-a809-4df2-9023-99f8cf4ae420-logs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.354667 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkwh\" (UniqueName: \"kubernetes.io/projected/b29a0fbe-a809-4df2-9023-99f8cf4ae420-kube-api-access-rkkwh\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.354748 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.354787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-config-data\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.354841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-public-tls-certs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.354879 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.355060 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29a0fbe-a809-4df2-9023-99f8cf4ae420-logs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.360886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-public-tls-certs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.361231 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.361833 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-config-data\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.361909 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.373214 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkwh\" (UniqueName: \"kubernetes.io/projected/b29a0fbe-a809-4df2-9023-99f8cf4ae420-kube-api-access-rkkwh\") pod \"nova-api-0\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.444401 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k959c"] Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.445944 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.449065 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.449074 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.454485 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k959c"] Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.456647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-scripts\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.456683 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-config-data\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.456707 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.456729 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7l5z\" (UniqueName: 
\"kubernetes.io/projected/24767692-8f87-45e7-b2cc-f80b48b4fcf7-kube-api-access-k7l5z\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.539901 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.558396 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-scripts\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.558437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-config-data\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.558461 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.558485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7l5z\" (UniqueName: \"kubernetes.io/projected/24767692-8f87-45e7-b2cc-f80b48b4fcf7-kube-api-access-k7l5z\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 
13:58:00.563866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-scripts\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.564601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.564760 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-config-data\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.575846 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7l5z\" (UniqueName: \"kubernetes.io/projected/24767692-8f87-45e7-b2cc-f80b48b4fcf7-kube-api-access-k7l5z\") pod \"nova-cell1-cell-mapping-k959c\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:00 crc kubenswrapper[4768]: I0217 13:58:00.776479 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:01 crc kubenswrapper[4768]: I0217 13:58:01.089404 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:01 crc kubenswrapper[4768]: I0217 13:58:01.134520 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b042dd2b-3a49-4aec-a401-e0f3980f0e73","Type":"ContainerStarted","Data":"5c9f13241bceca0eaf27bfb975e0eca451d58b9ae874c1e9a2542479a4f0453f"} Feb 17 13:58:01 crc kubenswrapper[4768]: I0217 13:58:01.136481 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29a0fbe-a809-4df2-9023-99f8cf4ae420","Type":"ContainerStarted","Data":"5c49722c03556f3ef8e0a8ba48be82f21f5ec80032c0bd00aac0153aa2de1081"} Feb 17 13:58:01 crc kubenswrapper[4768]: I0217 13:58:01.332704 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k959c"] Feb 17 13:58:01 crc kubenswrapper[4768]: W0217 13:58:01.340606 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24767692_8f87_45e7_b2cc_f80b48b4fcf7.slice/crio-c0734117760d1da3ac4012283979ec16e454647ae80b4c92dc345700fa45bb6a WatchSource:0}: Error finding container c0734117760d1da3ac4012283979ec16e454647ae80b4c92dc345700fa45bb6a: Status 404 returned error can't find the container with id c0734117760d1da3ac4012283979ec16e454647ae80b4c92dc345700fa45bb6a Feb 17 13:58:01 crc kubenswrapper[4768]: I0217 13:58:01.563675 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2be3d62-c831-45db-b7a4-34557edcf1af" path="/var/lib/kubelet/pods/f2be3d62-c831-45db-b7a4-34557edcf1af/volumes" Feb 17 13:58:02 crc kubenswrapper[4768]: I0217 13:58:02.146290 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b042dd2b-3a49-4aec-a401-e0f3980f0e73","Type":"ContainerStarted","Data":"78eecf555736780ef90e4462ec57a92f60eca2006be4ce07ce90636284befd7a"} Feb 17 13:58:02 crc kubenswrapper[4768]: I0217 13:58:02.150526 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k959c" event={"ID":"24767692-8f87-45e7-b2cc-f80b48b4fcf7","Type":"ContainerStarted","Data":"ab7aa0a25c48011c11b72614458f8cdfdae219a4d4976421375fd5451f2ec087"} Feb 17 13:58:02 crc kubenswrapper[4768]: I0217 13:58:02.150585 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k959c" event={"ID":"24767692-8f87-45e7-b2cc-f80b48b4fcf7","Type":"ContainerStarted","Data":"c0734117760d1da3ac4012283979ec16e454647ae80b4c92dc345700fa45bb6a"} Feb 17 13:58:02 crc kubenswrapper[4768]: I0217 13:58:02.152925 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29a0fbe-a809-4df2-9023-99f8cf4ae420","Type":"ContainerStarted","Data":"2a21e43854549fb63ae6d7371d69072ff687d729a325689f1d53d153395832c5"} Feb 17 13:58:02 crc kubenswrapper[4768]: I0217 13:58:02.152959 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29a0fbe-a809-4df2-9023-99f8cf4ae420","Type":"ContainerStarted","Data":"e352393dab8a239b1db73f05e2e3570d0b69f2bba89cc23a33643a036e524e63"} Feb 17 13:58:02 crc kubenswrapper[4768]: I0217 13:58:02.182323 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k959c" podStartSLOduration=2.182301932 podStartE2EDuration="2.182301932s" podCreationTimestamp="2026-02-17 13:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:58:02.172125497 +0000 UTC m=+1301.451511939" watchObservedRunningTime="2026-02-17 13:58:02.182301932 +0000 UTC m=+1301.461688374" Feb 17 13:58:02 crc kubenswrapper[4768]: I0217 
13:58:02.197401 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.1973793600000002 podStartE2EDuration="2.19737936s" podCreationTimestamp="2026-02-17 13:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:58:02.190573486 +0000 UTC m=+1301.469959948" watchObservedRunningTime="2026-02-17 13:58:02.19737936 +0000 UTC m=+1301.476765802" Feb 17 13:58:03 crc kubenswrapper[4768]: I0217 13:58:03.164514 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b042dd2b-3a49-4aec-a401-e0f3980f0e73","Type":"ContainerStarted","Data":"54a1adf1fad049f3b8fa64cb12d12fc26c4e2441d47d363e78a5102a295bb306"} Feb 17 13:58:03 crc kubenswrapper[4768]: I0217 13:58:03.164792 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b042dd2b-3a49-4aec-a401-e0f3980f0e73","Type":"ContainerStarted","Data":"45cc409b77067d28de44fa5056944fcaa3f0a513354b476d7227c3509c6ca2dc"} Feb 17 13:58:03 crc kubenswrapper[4768]: I0217 13:58:03.522031 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:58:03 crc kubenswrapper[4768]: I0217 13:58:03.598888 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zmmmh"] Feb 17 13:58:03 crc kubenswrapper[4768]: I0217 13:58:03.599156 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" podUID="bc797443-7e90-4b43-be24-df2291d9a72e" containerName="dnsmasq-dns" containerID="cri-o://262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7" gracePeriod=10 Feb 17 13:58:04 crc kubenswrapper[4768]: I0217 13:58:04.993767 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.142438 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-sb\") pod \"bc797443-7e90-4b43-be24-df2291d9a72e\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.142528 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-svc\") pod \"bc797443-7e90-4b43-be24-df2291d9a72e\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.142594 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbjk5\" (UniqueName: \"kubernetes.io/projected/bc797443-7e90-4b43-be24-df2291d9a72e-kube-api-access-wbjk5\") pod \"bc797443-7e90-4b43-be24-df2291d9a72e\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.142631 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-config\") pod \"bc797443-7e90-4b43-be24-df2291d9a72e\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.142738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-nb\") pod \"bc797443-7e90-4b43-be24-df2291d9a72e\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.142818 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-swift-storage-0\") pod \"bc797443-7e90-4b43-be24-df2291d9a72e\" (UID: \"bc797443-7e90-4b43-be24-df2291d9a72e\") " Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.148947 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc797443-7e90-4b43-be24-df2291d9a72e-kube-api-access-wbjk5" (OuterVolumeSpecName: "kube-api-access-wbjk5") pod "bc797443-7e90-4b43-be24-df2291d9a72e" (UID: "bc797443-7e90-4b43-be24-df2291d9a72e"). InnerVolumeSpecName "kube-api-access-wbjk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.191172 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc797443-7e90-4b43-be24-df2291d9a72e" (UID: "bc797443-7e90-4b43-be24-df2291d9a72e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.199323 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc797443-7e90-4b43-be24-df2291d9a72e" (UID: "bc797443-7e90-4b43-be24-df2291d9a72e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.204591 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc797443-7e90-4b43-be24-df2291d9a72e" (UID: "bc797443-7e90-4b43-be24-df2291d9a72e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.205412 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc797443-7e90-4b43-be24-df2291d9a72e" (UID: "bc797443-7e90-4b43-be24-df2291d9a72e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.205679 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-config" (OuterVolumeSpecName: "config") pod "bc797443-7e90-4b43-be24-df2291d9a72e" (UID: "bc797443-7e90-4b43-be24-df2291d9a72e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.228909 4768 generic.go:334] "Generic (PLEG): container finished" podID="bc797443-7e90-4b43-be24-df2291d9a72e" containerID="262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7" exitCode=0 Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.228953 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" event={"ID":"bc797443-7e90-4b43-be24-df2291d9a72e","Type":"ContainerDied","Data":"262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7"} Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.228990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" event={"ID":"bc797443-7e90-4b43-be24-df2291d9a72e","Type":"ContainerDied","Data":"3113e5b93bd11fe5e3416ae7d4585cd788af7333c9081dc426602798ca9a8cee"} Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.229013 4768 scope.go:117] "RemoveContainer" containerID="262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7" Feb 17 13:58:05 crc 
kubenswrapper[4768]: I0217 13:58:05.229014 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zmmmh" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.245627 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.245660 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.245673 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.245681 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbjk5\" (UniqueName: \"kubernetes.io/projected/bc797443-7e90-4b43-be24-df2291d9a72e-kube-api-access-wbjk5\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.245692 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.245700 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc797443-7e90-4b43-be24-df2291d9a72e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.267575 4768 scope.go:117] "RemoveContainer" containerID="928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 
13:58:05.277859 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zmmmh"] Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.286907 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zmmmh"] Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.297140 4768 scope.go:117] "RemoveContainer" containerID="262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7" Feb 17 13:58:05 crc kubenswrapper[4768]: E0217 13:58:05.300268 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7\": container with ID starting with 262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7 not found: ID does not exist" containerID="262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.300300 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7"} err="failed to get container status \"262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7\": rpc error: code = NotFound desc = could not find container \"262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7\": container with ID starting with 262fa1f5ddb0f8698cbc4b09579b6053fb8c1482fd561b0c8896fdf6158103e7 not found: ID does not exist" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.300322 4768 scope.go:117] "RemoveContainer" containerID="928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4" Feb 17 13:58:05 crc kubenswrapper[4768]: E0217 13:58:05.301068 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4\": container with ID starting with 
928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4 not found: ID does not exist" containerID="928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.301126 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4"} err="failed to get container status \"928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4\": rpc error: code = NotFound desc = could not find container \"928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4\": container with ID starting with 928effde91625203c6c6632d507372a7382c408160acf757c2a837f1eb87e7a4 not found: ID does not exist" Feb 17 13:58:05 crc kubenswrapper[4768]: I0217 13:58:05.599776 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc797443-7e90-4b43-be24-df2291d9a72e" path="/var/lib/kubelet/pods/bc797443-7e90-4b43-be24-df2291d9a72e/volumes" Feb 17 13:58:06 crc kubenswrapper[4768]: I0217 13:58:06.240359 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b042dd2b-3a49-4aec-a401-e0f3980f0e73","Type":"ContainerStarted","Data":"b25e47cac40078f0d2e7e8828e4c97a056fe04e7ddc81f8dd4c700da13d4cfa9"} Feb 17 13:58:06 crc kubenswrapper[4768]: I0217 13:58:06.240836 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 13:58:06 crc kubenswrapper[4768]: I0217 13:58:06.291765 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.157090504 podStartE2EDuration="7.291743061s" podCreationTimestamp="2026-02-17 13:57:59 +0000 UTC" firstStartedPulling="2026-02-17 13:58:00.346874599 +0000 UTC m=+1299.626261041" lastFinishedPulling="2026-02-17 13:58:05.481527146 +0000 UTC m=+1304.760913598" observedRunningTime="2026-02-17 13:58:06.285684997 +0000 UTC 
m=+1305.565071439" watchObservedRunningTime="2026-02-17 13:58:06.291743061 +0000 UTC m=+1305.571129503" Feb 17 13:58:08 crc kubenswrapper[4768]: I0217 13:58:08.257889 4768 generic.go:334] "Generic (PLEG): container finished" podID="24767692-8f87-45e7-b2cc-f80b48b4fcf7" containerID="ab7aa0a25c48011c11b72614458f8cdfdae219a4d4976421375fd5451f2ec087" exitCode=0 Feb 17 13:58:08 crc kubenswrapper[4768]: I0217 13:58:08.257974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k959c" event={"ID":"24767692-8f87-45e7-b2cc-f80b48b4fcf7","Type":"ContainerDied","Data":"ab7aa0a25c48011c11b72614458f8cdfdae219a4d4976421375fd5451f2ec087"} Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.646927 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.731050 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-combined-ca-bundle\") pod \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.731138 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7l5z\" (UniqueName: \"kubernetes.io/projected/24767692-8f87-45e7-b2cc-f80b48b4fcf7-kube-api-access-k7l5z\") pod \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.731324 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-scripts\") pod \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.731399 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-config-data\") pod \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\" (UID: \"24767692-8f87-45e7-b2cc-f80b48b4fcf7\") " Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.737609 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24767692-8f87-45e7-b2cc-f80b48b4fcf7-kube-api-access-k7l5z" (OuterVolumeSpecName: "kube-api-access-k7l5z") pod "24767692-8f87-45e7-b2cc-f80b48b4fcf7" (UID: "24767692-8f87-45e7-b2cc-f80b48b4fcf7"). InnerVolumeSpecName "kube-api-access-k7l5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.753390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-scripts" (OuterVolumeSpecName: "scripts") pod "24767692-8f87-45e7-b2cc-f80b48b4fcf7" (UID: "24767692-8f87-45e7-b2cc-f80b48b4fcf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.761400 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24767692-8f87-45e7-b2cc-f80b48b4fcf7" (UID: "24767692-8f87-45e7-b2cc-f80b48b4fcf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.768201 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-config-data" (OuterVolumeSpecName: "config-data") pod "24767692-8f87-45e7-b2cc-f80b48b4fcf7" (UID: "24767692-8f87-45e7-b2cc-f80b48b4fcf7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.833739 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.833772 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.833784 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24767692-8f87-45e7-b2cc-f80b48b4fcf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:09 crc kubenswrapper[4768]: I0217 13:58:09.833797 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7l5z\" (UniqueName: \"kubernetes.io/projected/24767692-8f87-45e7-b2cc-f80b48b4fcf7-kube-api-access-k7l5z\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.284916 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k959c" event={"ID":"24767692-8f87-45e7-b2cc-f80b48b4fcf7","Type":"ContainerDied","Data":"c0734117760d1da3ac4012283979ec16e454647ae80b4c92dc345700fa45bb6a"} Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.285319 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0734117760d1da3ac4012283979ec16e454647ae80b4c92dc345700fa45bb6a" Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.285002 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k959c" Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.530932 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.531185 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-log" containerID="cri-o://e352393dab8a239b1db73f05e2e3570d0b69f2bba89cc23a33643a036e524e63" gracePeriod=30 Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.531225 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-api" containerID="cri-o://2a21e43854549fb63ae6d7371d69072ff687d729a325689f1d53d153395832c5" gracePeriod=30 Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.556616 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.557203 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c98f298b-2ec8-4d90-9112-8b5ad9109a92" containerName="nova-scheduler-scheduler" containerID="cri-o://edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7" gracePeriod=30 Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.583096 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.583548 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-log" containerID="cri-o://76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5" gracePeriod=30 Feb 17 13:58:10 crc kubenswrapper[4768]: I0217 13:58:10.583992 4768 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-metadata" containerID="cri-o://27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5" gracePeriod=30 Feb 17 13:58:11 crc kubenswrapper[4768]: E0217 13:58:11.171997 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 13:58:11 crc kubenswrapper[4768]: E0217 13:58:11.173694 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 13:58:11 crc kubenswrapper[4768]: E0217 13:58:11.175567 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 13:58:11 crc kubenswrapper[4768]: E0217 13:58:11.175612 4768 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c98f298b-2ec8-4d90-9112-8b5ad9109a92" containerName="nova-scheduler-scheduler" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.314071 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="022e1dee-9e9a-4898-8667-cdce272dce30" containerID="76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5" exitCode=143 Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.314169 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"022e1dee-9e9a-4898-8667-cdce272dce30","Type":"ContainerDied","Data":"76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5"} Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.318260 4768 generic.go:334] "Generic (PLEG): container finished" podID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerID="2a21e43854549fb63ae6d7371d69072ff687d729a325689f1d53d153395832c5" exitCode=0 Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.318295 4768 generic.go:334] "Generic (PLEG): container finished" podID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerID="e352393dab8a239b1db73f05e2e3570d0b69f2bba89cc23a33643a036e524e63" exitCode=143 Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.318316 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29a0fbe-a809-4df2-9023-99f8cf4ae420","Type":"ContainerDied","Data":"2a21e43854549fb63ae6d7371d69072ff687d729a325689f1d53d153395832c5"} Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.318337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29a0fbe-a809-4df2-9023-99f8cf4ae420","Type":"ContainerDied","Data":"e352393dab8a239b1db73f05e2e3570d0b69f2bba89cc23a33643a036e524e63"} Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.416649 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.574294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29a0fbe-a809-4df2-9023-99f8cf4ae420-logs\") pod \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.574450 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-public-tls-certs\") pod \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.574477 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-config-data\") pod \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.574507 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkkwh\" (UniqueName: \"kubernetes.io/projected/b29a0fbe-a809-4df2-9023-99f8cf4ae420-kube-api-access-rkkwh\") pod \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.574546 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-combined-ca-bundle\") pod \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.574590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-internal-tls-certs\") pod \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\" (UID: \"b29a0fbe-a809-4df2-9023-99f8cf4ae420\") " Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.574879 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29a0fbe-a809-4df2-9023-99f8cf4ae420-logs" (OuterVolumeSpecName: "logs") pod "b29a0fbe-a809-4df2-9023-99f8cf4ae420" (UID: "b29a0fbe-a809-4df2-9023-99f8cf4ae420"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.575072 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29a0fbe-a809-4df2-9023-99f8cf4ae420-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.581129 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29a0fbe-a809-4df2-9023-99f8cf4ae420-kube-api-access-rkkwh" (OuterVolumeSpecName: "kube-api-access-rkkwh") pod "b29a0fbe-a809-4df2-9023-99f8cf4ae420" (UID: "b29a0fbe-a809-4df2-9023-99f8cf4ae420"). InnerVolumeSpecName "kube-api-access-rkkwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.608092 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-config-data" (OuterVolumeSpecName: "config-data") pod "b29a0fbe-a809-4df2-9023-99f8cf4ae420" (UID: "b29a0fbe-a809-4df2-9023-99f8cf4ae420"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.610408 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b29a0fbe-a809-4df2-9023-99f8cf4ae420" (UID: "b29a0fbe-a809-4df2-9023-99f8cf4ae420"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.628011 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b29a0fbe-a809-4df2-9023-99f8cf4ae420" (UID: "b29a0fbe-a809-4df2-9023-99f8cf4ae420"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.639929 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b29a0fbe-a809-4df2-9023-99f8cf4ae420" (UID: "b29a0fbe-a809-4df2-9023-99f8cf4ae420"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.679275 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.679783 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.679883 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkkwh\" (UniqueName: \"kubernetes.io/projected/b29a0fbe-a809-4df2-9023-99f8cf4ae420-kube-api-access-rkkwh\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.680018 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:11 crc kubenswrapper[4768]: I0217 13:58:11.680126 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29a0fbe-a809-4df2-9023-99f8cf4ae420-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.331164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b29a0fbe-a809-4df2-9023-99f8cf4ae420","Type":"ContainerDied","Data":"5c49722c03556f3ef8e0a8ba48be82f21f5ec80032c0bd00aac0153aa2de1081"} Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.331481 4768 scope.go:117] "RemoveContainer" containerID="2a21e43854549fb63ae6d7371d69072ff687d729a325689f1d53d153395832c5" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.331283 4768 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.363064 4768 scope.go:117] "RemoveContainer" containerID="e352393dab8a239b1db73f05e2e3570d0b69f2bba89cc23a33643a036e524e63" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.376800 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.389086 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403247 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:12 crc kubenswrapper[4768]: E0217 13:58:12.403632 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24767692-8f87-45e7-b2cc-f80b48b4fcf7" containerName="nova-manage" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403648 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="24767692-8f87-45e7-b2cc-f80b48b4fcf7" containerName="nova-manage" Feb 17 13:58:12 crc kubenswrapper[4768]: E0217 13:58:12.403661 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-log" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403667 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-log" Feb 17 13:58:12 crc kubenswrapper[4768]: E0217 13:58:12.403678 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc797443-7e90-4b43-be24-df2291d9a72e" containerName="dnsmasq-dns" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403685 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc797443-7e90-4b43-be24-df2291d9a72e" containerName="dnsmasq-dns" Feb 17 13:58:12 crc kubenswrapper[4768]: E0217 13:58:12.403691 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bc797443-7e90-4b43-be24-df2291d9a72e" containerName="init" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403697 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc797443-7e90-4b43-be24-df2291d9a72e" containerName="init" Feb 17 13:58:12 crc kubenswrapper[4768]: E0217 13:58:12.403726 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-api" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403731 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-api" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403884 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="24767692-8f87-45e7-b2cc-f80b48b4fcf7" containerName="nova-manage" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403900 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc797443-7e90-4b43-be24-df2291d9a72e" containerName="dnsmasq-dns" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403907 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-log" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.403927 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" containerName="nova-api-api" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.405415 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.409866 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.409986 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.410690 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.445634 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.494137 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.494282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-logs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.494395 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-config-data\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.494424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.494452 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsww6\" (UniqueName: \"kubernetes.io/projected/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-kube-api-access-wsww6\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.494475 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.596507 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-logs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.596682 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-config-data\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.596718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc 
kubenswrapper[4768]: I0217 13:58:12.596743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsww6\" (UniqueName: \"kubernetes.io/projected/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-kube-api-access-wsww6\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.596769 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.596861 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.596920 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-logs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.601191 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.601525 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.602276 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.602929 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-config-data\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.620931 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsww6\" (UniqueName: \"kubernetes.io/projected/8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe-kube-api-access-wsww6\") pod \"nova-api-0\" (UID: \"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe\") " pod="openstack/nova-api-0" Feb 17 13:58:12 crc kubenswrapper[4768]: I0217 13:58:12.740190 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 13:58:13 crc kubenswrapper[4768]: I0217 13:58:13.186644 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 13:58:13 crc kubenswrapper[4768]: W0217 13:58:13.195230 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd17ba2_d7f2_4af3_a8e7_078578b6c8fe.slice/crio-19c0d121f66c26d77b1978f8520c102bea07ebcddcb612edec1cc81a8130322a WatchSource:0}: Error finding container 19c0d121f66c26d77b1978f8520c102bea07ebcddcb612edec1cc81a8130322a: Status 404 returned error can't find the container with id 19c0d121f66c26d77b1978f8520c102bea07ebcddcb612edec1cc81a8130322a Feb 17 13:58:13 crc kubenswrapper[4768]: I0217 13:58:13.345523 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe","Type":"ContainerStarted","Data":"19c0d121f66c26d77b1978f8520c102bea07ebcddcb612edec1cc81a8130322a"} Feb 17 13:58:13 crc kubenswrapper[4768]: I0217 13:58:13.549775 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29a0fbe-a809-4df2-9023-99f8cf4ae420" path="/var/lib/kubelet/pods/b29a0fbe-a809-4df2-9023-99f8cf4ae420/volumes" Feb 17 13:58:13 crc kubenswrapper[4768]: I0217 13:58:13.839772 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:48762->10.217.0.196:8775: read: connection reset by peer" Feb 17 13:58:13 crc kubenswrapper[4768]: I0217 13:58:13.840544 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 
10.217.0.2:48764->10.217.0.196:8775: read: connection reset by peer" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.276574 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.335765 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-nova-metadata-tls-certs\") pod \"022e1dee-9e9a-4898-8667-cdce272dce30\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.336015 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k5lm\" (UniqueName: \"kubernetes.io/projected/022e1dee-9e9a-4898-8667-cdce272dce30-kube-api-access-4k5lm\") pod \"022e1dee-9e9a-4898-8667-cdce272dce30\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.336080 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-config-data\") pod \"022e1dee-9e9a-4898-8667-cdce272dce30\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.336135 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-combined-ca-bundle\") pod \"022e1dee-9e9a-4898-8667-cdce272dce30\" (UID: \"022e1dee-9e9a-4898-8667-cdce272dce30\") " Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.336211 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/022e1dee-9e9a-4898-8667-cdce272dce30-logs\") pod \"022e1dee-9e9a-4898-8667-cdce272dce30\" (UID: 
\"022e1dee-9e9a-4898-8667-cdce272dce30\") " Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.336780 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022e1dee-9e9a-4898-8667-cdce272dce30-logs" (OuterVolumeSpecName: "logs") pod "022e1dee-9e9a-4898-8667-cdce272dce30" (UID: "022e1dee-9e9a-4898-8667-cdce272dce30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.351844 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022e1dee-9e9a-4898-8667-cdce272dce30-kube-api-access-4k5lm" (OuterVolumeSpecName: "kube-api-access-4k5lm") pod "022e1dee-9e9a-4898-8667-cdce272dce30" (UID: "022e1dee-9e9a-4898-8667-cdce272dce30"). InnerVolumeSpecName "kube-api-access-4k5lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.365588 4768 generic.go:334] "Generic (PLEG): container finished" podID="022e1dee-9e9a-4898-8667-cdce272dce30" containerID="27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5" exitCode=0 Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.365697 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.365702 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"022e1dee-9e9a-4898-8667-cdce272dce30","Type":"ContainerDied","Data":"27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5"} Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.365816 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"022e1dee-9e9a-4898-8667-cdce272dce30","Type":"ContainerDied","Data":"7e3df3d713a510aa95412e9ba3f658d2e46b6105ee36f177f8cf97b5891662d6"} Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.365835 4768 scope.go:117] "RemoveContainer" containerID="27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.372747 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe","Type":"ContainerStarted","Data":"aeae04398375c4f32d59489582990fcf86d57d6404fd006cf60848deb89ee8ee"} Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.372791 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe","Type":"ContainerStarted","Data":"a79473a86633bd54d7aa854a3fde85eb0c94aaba88aae454ae3dd123d4e38347"} Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.375645 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "022e1dee-9e9a-4898-8667-cdce272dce30" (UID: "022e1dee-9e9a-4898-8667-cdce272dce30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.389024 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-config-data" (OuterVolumeSpecName: "config-data") pod "022e1dee-9e9a-4898-8667-cdce272dce30" (UID: "022e1dee-9e9a-4898-8667-cdce272dce30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.409877 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.409843901 podStartE2EDuration="2.409843901s" podCreationTimestamp="2026-02-17 13:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:58:14.396757996 +0000 UTC m=+1313.676144458" watchObservedRunningTime="2026-02-17 13:58:14.409843901 +0000 UTC m=+1313.689230343" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.436525 4768 scope.go:117] "RemoveContainer" containerID="76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.440785 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k5lm\" (UniqueName: \"kubernetes.io/projected/022e1dee-9e9a-4898-8667-cdce272dce30-kube-api-access-4k5lm\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.440831 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.440844 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.440855 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/022e1dee-9e9a-4898-8667-cdce272dce30-logs\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.442601 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "022e1dee-9e9a-4898-8667-cdce272dce30" (UID: "022e1dee-9e9a-4898-8667-cdce272dce30"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.465868 4768 scope.go:117] "RemoveContainer" containerID="27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5" Feb 17 13:58:14 crc kubenswrapper[4768]: E0217 13:58:14.466611 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5\": container with ID starting with 27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5 not found: ID does not exist" containerID="27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.466728 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5"} err="failed to get container status \"27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5\": rpc error: code = NotFound desc = could not find container \"27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5\": container with ID starting with 27be652ef96bc300d40975efd52ad65c772827c7235097f02b4da5a9257bd2c5 not found: ID does not exist" Feb 17 13:58:14 crc 
kubenswrapper[4768]: I0217 13:58:14.466819 4768 scope.go:117] "RemoveContainer" containerID="76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5" Feb 17 13:58:14 crc kubenswrapper[4768]: E0217 13:58:14.467199 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5\": container with ID starting with 76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5 not found: ID does not exist" containerID="76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.467288 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5"} err="failed to get container status \"76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5\": rpc error: code = NotFound desc = could not find container \"76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5\": container with ID starting with 76a2c98d16b8679f51a4fe0a4661f6ac197d032186dd0ca9b3cef0356311e6d5 not found: ID does not exist" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.542258 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/022e1dee-9e9a-4898-8667-cdce272dce30-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.757537 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.768048 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.803194 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:58:14 crc kubenswrapper[4768]: E0217 
13:58:14.803564 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-metadata" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.803580 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-metadata" Feb 17 13:58:14 crc kubenswrapper[4768]: E0217 13:58:14.803596 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-log" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.803602 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-log" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.803773 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-log" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.803801 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" containerName="nova-metadata-metadata" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.804753 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.807382 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.807573 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.821154 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.850957 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.851034 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqkd\" (UniqueName: \"kubernetes.io/projected/4a274ef1-85cc-4456-960d-079fe7c8ea6d-kube-api-access-nqqkd\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.851175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.851305 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-config-data\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.851353 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a274ef1-85cc-4456-960d-079fe7c8ea6d-logs\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.953421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a274ef1-85cc-4456-960d-079fe7c8ea6d-logs\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.953576 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.953606 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqkd\" (UniqueName: \"kubernetes.io/projected/4a274ef1-85cc-4456-960d-079fe7c8ea6d-kube-api-access-nqqkd\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.953626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " 
pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.953657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-config-data\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.954129 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a274ef1-85cc-4456-960d-079fe7c8ea6d-logs\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.958808 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.959009 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-config-data\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.974126 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a274ef1-85cc-4456-960d-079fe7c8ea6d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:14 crc kubenswrapper[4768]: I0217 13:58:14.984515 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqkd\" (UniqueName: 
\"kubernetes.io/projected/4a274ef1-85cc-4456-960d-079fe7c8ea6d-kube-api-access-nqqkd\") pod \"nova-metadata-0\" (UID: \"4a274ef1-85cc-4456-960d-079fe7c8ea6d\") " pod="openstack/nova-metadata-0" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.163196 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.382505 4768 generic.go:334] "Generic (PLEG): container finished" podID="c98f298b-2ec8-4d90-9112-8b5ad9109a92" containerID="edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7" exitCode=0 Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.383742 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c98f298b-2ec8-4d90-9112-8b5ad9109a92","Type":"ContainerDied","Data":"edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7"} Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.393906 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.475576 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jjkq\" (UniqueName: \"kubernetes.io/projected/c98f298b-2ec8-4d90-9112-8b5ad9109a92-kube-api-access-2jjkq\") pod \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.475646 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-config-data\") pod \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.475740 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-combined-ca-bundle\") pod \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\" (UID: \"c98f298b-2ec8-4d90-9112-8b5ad9109a92\") " Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.494673 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98f298b-2ec8-4d90-9112-8b5ad9109a92-kube-api-access-2jjkq" (OuterVolumeSpecName: "kube-api-access-2jjkq") pod "c98f298b-2ec8-4d90-9112-8b5ad9109a92" (UID: "c98f298b-2ec8-4d90-9112-8b5ad9109a92"). InnerVolumeSpecName "kube-api-access-2jjkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.508704 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-config-data" (OuterVolumeSpecName: "config-data") pod "c98f298b-2ec8-4d90-9112-8b5ad9109a92" (UID: "c98f298b-2ec8-4d90-9112-8b5ad9109a92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.509826 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c98f298b-2ec8-4d90-9112-8b5ad9109a92" (UID: "c98f298b-2ec8-4d90-9112-8b5ad9109a92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.545401 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022e1dee-9e9a-4898-8667-cdce272dce30" path="/var/lib/kubelet/pods/022e1dee-9e9a-4898-8667-cdce272dce30/volumes" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.577890 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jjkq\" (UniqueName: \"kubernetes.io/projected/c98f298b-2ec8-4d90-9112-8b5ad9109a92-kube-api-access-2jjkq\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.577939 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.577956 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98f298b-2ec8-4d90-9112-8b5ad9109a92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:15 crc kubenswrapper[4768]: I0217 13:58:15.617827 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.394239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4a274ef1-85cc-4456-960d-079fe7c8ea6d","Type":"ContainerStarted","Data":"b0f0dd856175d2d4d611e0f2f14a47be12b24e25c0d20404080f44f6a12073dd"} Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.394535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a274ef1-85cc-4456-960d-079fe7c8ea6d","Type":"ContainerStarted","Data":"506f28d1422d7c435a0ef725c1996a6e568442cb093f19566c9268cbb56cb694"} Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.394545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a274ef1-85cc-4456-960d-079fe7c8ea6d","Type":"ContainerStarted","Data":"7c98ac8c626aad0d6e6e5e2d4eea652b9dff83122a4a118085c56c8bb9739404"} Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.395783 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c98f298b-2ec8-4d90-9112-8b5ad9109a92","Type":"ContainerDied","Data":"281c3ad551b1ead1989f0d1dde0cc07ad7d3e09f0b25558a16c5aca585763c14"} Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.395814 4768 scope.go:117] "RemoveContainer" containerID="edbfa891c9ee3949919595ea6240dcb07608c35f0542a6f28bc8f60edfe895f7" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.395838 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.425333 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.425312427 podStartE2EDuration="2.425312427s" podCreationTimestamp="2026-02-17 13:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:58:16.416619572 +0000 UTC m=+1315.696006014" watchObservedRunningTime="2026-02-17 13:58:16.425312427 +0000 UTC m=+1315.704698859" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.437537 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.446704 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.458495 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:58:16 crc kubenswrapper[4768]: E0217 13:58:16.458932 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98f298b-2ec8-4d90-9112-8b5ad9109a92" containerName="nova-scheduler-scheduler" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.458952 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98f298b-2ec8-4d90-9112-8b5ad9109a92" containerName="nova-scheduler-scheduler" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.459158 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c98f298b-2ec8-4d90-9112-8b5ad9109a92" containerName="nova-scheduler-scheduler" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.459760 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.462422 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.469455 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.597251 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vc4\" (UniqueName: \"kubernetes.io/projected/1b3ad6f8-7496-467a-bdeb-7cf29963af21-kube-api-access-g8vc4\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.597307 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3ad6f8-7496-467a-bdeb-7cf29963af21-config-data\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.597351 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3ad6f8-7496-467a-bdeb-7cf29963af21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.698582 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3ad6f8-7496-467a-bdeb-7cf29963af21-config-data\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.698650 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3ad6f8-7496-467a-bdeb-7cf29963af21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.698823 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vc4\" (UniqueName: \"kubernetes.io/projected/1b3ad6f8-7496-467a-bdeb-7cf29963af21-kube-api-access-g8vc4\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.703840 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3ad6f8-7496-467a-bdeb-7cf29963af21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.707414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3ad6f8-7496-467a-bdeb-7cf29963af21-config-data\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.716955 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8vc4\" (UniqueName: \"kubernetes.io/projected/1b3ad6f8-7496-467a-bdeb-7cf29963af21-kube-api-access-g8vc4\") pod \"nova-scheduler-0\" (UID: \"1b3ad6f8-7496-467a-bdeb-7cf29963af21\") " pod="openstack/nova-scheduler-0" Feb 17 13:58:16 crc kubenswrapper[4768]: I0217 13:58:16.780046 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 13:58:17 crc kubenswrapper[4768]: I0217 13:58:17.287504 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 13:58:17 crc kubenswrapper[4768]: W0217 13:58:17.301521 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3ad6f8_7496_467a_bdeb_7cf29963af21.slice/crio-8d29da10a98cd02e5b3ceec65496cb1198ca464975aa768ba2443974e589036e WatchSource:0}: Error finding container 8d29da10a98cd02e5b3ceec65496cb1198ca464975aa768ba2443974e589036e: Status 404 returned error can't find the container with id 8d29da10a98cd02e5b3ceec65496cb1198ca464975aa768ba2443974e589036e Feb 17 13:58:17 crc kubenswrapper[4768]: I0217 13:58:17.420469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b3ad6f8-7496-467a-bdeb-7cf29963af21","Type":"ContainerStarted","Data":"8d29da10a98cd02e5b3ceec65496cb1198ca464975aa768ba2443974e589036e"} Feb 17 13:58:17 crc kubenswrapper[4768]: I0217 13:58:17.546945 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98f298b-2ec8-4d90-9112-8b5ad9109a92" path="/var/lib/kubelet/pods/c98f298b-2ec8-4d90-9112-8b5ad9109a92/volumes" Feb 17 13:58:18 crc kubenswrapper[4768]: I0217 13:58:18.432292 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b3ad6f8-7496-467a-bdeb-7cf29963af21","Type":"ContainerStarted","Data":"c3b7113c861e00da3b1fb3f59ad3792d1288217109724130fa65d1ea006e195d"} Feb 17 13:58:18 crc kubenswrapper[4768]: I0217 13:58:18.465715 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.465689239 podStartE2EDuration="2.465689239s" podCreationTimestamp="2026-02-17 13:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 13:58:18.454502236 +0000 UTC m=+1317.733888678" watchObservedRunningTime="2026-02-17 13:58:18.465689239 +0000 UTC m=+1317.745075691" Feb 17 13:58:20 crc kubenswrapper[4768]: I0217 13:58:20.164209 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:58:20 crc kubenswrapper[4768]: I0217 13:58:20.164658 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 13:58:21 crc kubenswrapper[4768]: I0217 13:58:21.781141 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 13:58:22 crc kubenswrapper[4768]: I0217 13:58:22.740936 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 13:58:22 crc kubenswrapper[4768]: I0217 13:58:22.741280 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 13:58:23 crc kubenswrapper[4768]: I0217 13:58:23.755292 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 13:58:23 crc kubenswrapper[4768]: I0217 13:58:23.755311 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 13:58:25 crc kubenswrapper[4768]: I0217 13:58:25.164657 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 13:58:25 crc kubenswrapper[4768]: I0217 13:58:25.164914 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 13:58:26 crc kubenswrapper[4768]: I0217 13:58:26.179305 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a274ef1-85cc-4456-960d-079fe7c8ea6d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 13:58:26 crc kubenswrapper[4768]: I0217 13:58:26.179325 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a274ef1-85cc-4456-960d-079fe7c8ea6d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 13:58:26 crc kubenswrapper[4768]: I0217 13:58:26.781196 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 13:58:26 crc kubenswrapper[4768]: I0217 13:58:26.813063 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 13:58:27 crc kubenswrapper[4768]: I0217 13:58:27.566409 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.060400 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.060476 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.060538 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.061336 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9b51566c32baca16b7c982a1f5be2bc77d96745c6b89bf249154277d12b15c6"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.061396 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://f9b51566c32baca16b7c982a1f5be2bc77d96745c6b89bf249154277d12b15c6" gracePeriod=600 Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.533623 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="f9b51566c32baca16b7c982a1f5be2bc77d96745c6b89bf249154277d12b15c6" exitCode=0 Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.533717 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"f9b51566c32baca16b7c982a1f5be2bc77d96745c6b89bf249154277d12b15c6"} Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.534299 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" 
event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"} Feb 17 13:58:28 crc kubenswrapper[4768]: I0217 13:58:28.534329 4768 scope.go:117] "RemoveContainer" containerID="8c7bac4bbfa7a551b4bc123db2f23e406ad5c1983352def084482a277bb70005" Feb 17 13:58:29 crc kubenswrapper[4768]: I0217 13:58:29.848981 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 13:58:32 crc kubenswrapper[4768]: I0217 13:58:32.747898 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 13:58:32 crc kubenswrapper[4768]: I0217 13:58:32.748544 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 13:58:32 crc kubenswrapper[4768]: I0217 13:58:32.748647 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 13:58:32 crc kubenswrapper[4768]: I0217 13:58:32.754193 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 13:58:33 crc kubenswrapper[4768]: I0217 13:58:33.607057 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 13:58:33 crc kubenswrapper[4768]: I0217 13:58:33.613555 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 13:58:35 crc kubenswrapper[4768]: I0217 13:58:35.174071 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 13:58:35 crc kubenswrapper[4768]: I0217 13:58:35.175173 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 13:58:35 crc kubenswrapper[4768]: I0217 13:58:35.182235 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 13:58:35 crc 
kubenswrapper[4768]: I0217 13:58:35.633465 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 13:58:43 crc kubenswrapper[4768]: I0217 13:58:43.698697 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:58:44 crc kubenswrapper[4768]: I0217 13:58:44.432899 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 13:58:47 crc kubenswrapper[4768]: I0217 13:58:47.629509 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerName="rabbitmq" containerID="cri-o://d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb" gracePeriod=604797 Feb 17 13:58:47 crc kubenswrapper[4768]: I0217 13:58:47.721746 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Feb 17 13:58:48 crc kubenswrapper[4768]: I0217 13:58:48.666574 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7d5df4be-f003-429d-8a84-81a239db88c0" containerName="rabbitmq" containerID="cri-o://ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151" gracePeriod=604796 Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.277397 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428289 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-erlang-cookie\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428681 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-confd\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428721 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428752 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-tls\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbcbv\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-kube-api-access-vbcbv\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428918 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-plugins\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428969 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-plugins-conf\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.429007 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9615d9e4-113e-4282-a091-a8c69a0c7968-erlang-cookie-secret\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.428999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.429532 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.429540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.429041 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9615d9e4-113e-4282-a091-a8c69a0c7968-pod-info\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.429735 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-config-data\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.430065 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-server-conf\") pod \"9615d9e4-113e-4282-a091-a8c69a0c7968\" (UID: \"9615d9e4-113e-4282-a091-a8c69a0c7968\") " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.430894 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.430922 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.430934 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.435531 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9615d9e4-113e-4282-a091-a8c69a0c7968-pod-info" (OuterVolumeSpecName: "pod-info") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.435946 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.437028 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.439313 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9615d9e4-113e-4282-a091-a8c69a0c7968-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.460395 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-kube-api-access-vbcbv" (OuterVolumeSpecName: "kube-api-access-vbcbv") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "kube-api-access-vbcbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.490526 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-config-data" (OuterVolumeSpecName: "config-data") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.506803 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-server-conf" (OuterVolumeSpecName: "server-conf") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.533193 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.533224 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.533237 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbcbv\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-kube-api-access-vbcbv\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.533247 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9615d9e4-113e-4282-a091-a8c69a0c7968-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.533256 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9615d9e4-113e-4282-a091-a8c69a0c7968-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.533266 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.533274 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9615d9e4-113e-4282-a091-a8c69a0c7968-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.561824 4768 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.575572 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9615d9e4-113e-4282-a091-a8c69a0c7968" (UID: "9615d9e4-113e-4282-a091-a8c69a0c7968"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.635540 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9615d9e4-113e-4282-a091-a8c69a0c7968-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.635587 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.793199 4768 generic.go:334] "Generic (PLEG): container finished" podID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerID="d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb" exitCode=0 Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.793246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9615d9e4-113e-4282-a091-a8c69a0c7968","Type":"ContainerDied","Data":"d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb"} Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.793281 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9615d9e4-113e-4282-a091-a8c69a0c7968","Type":"ContainerDied","Data":"0b19f20b6877db679d6f3b9fbe72c05ab41581e600a66a3c7b76c954adf7b1c8"} Feb 17 13:58:54 crc 
kubenswrapper[4768]: I0217 13:58:54.793303 4768 scope.go:117] "RemoveContainer" containerID="d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.793438 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.841476 4768 scope.go:117] "RemoveContainer" containerID="e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.848540 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.860870 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.896311 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:58:54 crc kubenswrapper[4768]: E0217 13:58:54.898592 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerName="setup-container" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.898623 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerName="setup-container" Feb 17 13:58:54 crc kubenswrapper[4768]: E0217 13:58:54.898702 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerName="rabbitmq" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.898713 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerName="rabbitmq" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.899250 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" containerName="rabbitmq" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 
13:58:54.901444 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.904063 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bm4g4" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.904367 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.905566 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.905673 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.909185 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.909833 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.912263 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 13:58:54 crc kubenswrapper[4768]: I0217 13:58:54.916176 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.008933 4768 scope.go:117] "RemoveContainer" containerID="d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb" Feb 17 13:58:55 crc kubenswrapper[4768]: E0217 13:58:55.009966 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb\": container with ID starting with d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb not 
found: ID does not exist" containerID="d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.010025 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb"} err="failed to get container status \"d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb\": rpc error: code = NotFound desc = could not find container \"d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb\": container with ID starting with d89e9e7f992f67ca081114ce9e3f958251d8095b8b48f2d57f1b38ff3926adbb not found: ID does not exist" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.010048 4768 scope.go:117] "RemoveContainer" containerID="e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4" Feb 17 13:58:55 crc kubenswrapper[4768]: E0217 13:58:55.010913 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4\": container with ID starting with e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4 not found: ID does not exist" containerID="e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.011051 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4"} err="failed to get container status \"e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4\": rpc error: code = NotFound desc = could not find container \"e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4\": container with ID starting with e1a6eaa5057abc76e416b61565b243db97335cdb11555ea5038bf95b5c8e8de4 not found: ID does not exist" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042563 
4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042613 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042641 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042682 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042727 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042752 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78830acd-378f-4199-8615-9884cdca4154-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042775 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-config-data\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042801 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042858 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042879 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78830acd-378f-4199-8615-9884cdca4154-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.042902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52qs\" 
(UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-kube-api-access-x52qs\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.143919 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.143961 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78830acd-378f-4199-8615-9884cdca4154-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.143988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x52qs\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-kube-api-access-x52qs\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144011 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144033 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" 
(UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144057 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144094 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78830acd-378f-4199-8615-9884cdca4154-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-config-data\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144211 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144505 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.144695 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.147307 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.149351 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.149740 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.150368 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78830acd-378f-4199-8615-9884cdca4154-config-data\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.161365 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78830acd-378f-4199-8615-9884cdca4154-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.162973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.165009 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78830acd-378f-4199-8615-9884cdca4154-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.165138 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.165310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x52qs\" (UniqueName: \"kubernetes.io/projected/78830acd-378f-4199-8615-9884cdca4154-kube-api-access-x52qs\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.187435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"78830acd-378f-4199-8615-9884cdca4154\") " pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.279200 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.356368 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.449902 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-plugins-conf\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.449956 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d5df4be-f003-429d-8a84-81a239db88c0-pod-info\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.449987 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-plugins\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 
crc kubenswrapper[4768]: I0217 13:58:55.450009 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.450119 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d5df4be-f003-429d-8a84-81a239db88c0-erlang-cookie-secret\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.450156 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-server-conf\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.450223 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-config-data\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.450259 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6nk2\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-kube-api-access-t6nk2\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.450352 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-tls\") pod 
\"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.450417 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-erlang-cookie\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.450455 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-confd\") pod \"7d5df4be-f003-429d-8a84-81a239db88c0\" (UID: \"7d5df4be-f003-429d-8a84-81a239db88c0\") " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.453660 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.455566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.455869 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.459246 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.460446 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7d5df4be-f003-429d-8a84-81a239db88c0-pod-info" (OuterVolumeSpecName: "pod-info") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.460981 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d5df4be-f003-429d-8a84-81a239db88c0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.461868 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.463299 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-kube-api-access-t6nk2" (OuterVolumeSpecName: "kube-api-access-t6nk2") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "kube-api-access-t6nk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.480990 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-config-data" (OuterVolumeSpecName: "config-data") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.521899 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-server-conf" (OuterVolumeSpecName: "server-conf") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.552596 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.552818 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d5df4be-f003-429d-8a84-81a239db88c0-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.552878 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.552975 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.553038 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d5df4be-f003-429d-8a84-81a239db88c0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.553116 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.553185 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d5df4be-f003-429d-8a84-81a239db88c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.553244 4768 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-t6nk2\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-kube-api-access-t6nk2\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.553299 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.553352 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.554959 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9615d9e4-113e-4282-a091-a8c69a0c7968" path="/var/lib/kubelet/pods/9615d9e4-113e-4282-a091-a8c69a0c7968/volumes" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.582142 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.592293 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7d5df4be-f003-429d-8a84-81a239db88c0" (UID: "7d5df4be-f003-429d-8a84-81a239db88c0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.655431 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d5df4be-f003-429d-8a84-81a239db88c0-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.655466 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.805074 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.808936 4768 generic.go:334] "Generic (PLEG): container finished" podID="7d5df4be-f003-429d-8a84-81a239db88c0" containerID="ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151" exitCode=0
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.808985 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.809019 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d5df4be-f003-429d-8a84-81a239db88c0","Type":"ContainerDied","Data":"ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151"}
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.809051 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7d5df4be-f003-429d-8a84-81a239db88c0","Type":"ContainerDied","Data":"025e73479229fbdeae9d1d20002858466f4b18906e6618e10f641a478641e6a0"}
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.809072 4768 scope.go:117] "RemoveContainer" containerID="ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.858300 4768 scope.go:117] "RemoveContainer" containerID="16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.895832 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.909079 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.927455 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 13:58:55 crc kubenswrapper[4768]: E0217 13:58:55.928052 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5df4be-f003-429d-8a84-81a239db88c0" containerName="setup-container"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.928071 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5df4be-f003-429d-8a84-81a239db88c0" containerName="setup-container"
Feb 17 13:58:55 crc kubenswrapper[4768]: E0217 13:58:55.928118 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5df4be-f003-429d-8a84-81a239db88c0" containerName="rabbitmq"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.928124 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5df4be-f003-429d-8a84-81a239db88c0" containerName="rabbitmq"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.928384 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5df4be-f003-429d-8a84-81a239db88c0" containerName="rabbitmq"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.929773 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.930010 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.933577 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.933738 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.934870 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r6btj"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.935007 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.935162 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.935303 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.935458 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.952788 4768 scope.go:117] "RemoveContainer" containerID="ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151"
Feb 17 13:58:55 crc kubenswrapper[4768]: E0217 13:58:55.954226 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151\": container with ID starting with ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151 not found: ID does not exist" containerID="ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.954269 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151"} err="failed to get container status \"ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151\": rpc error: code = NotFound desc = could not find container \"ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151\": container with ID starting with ddaeb65503c006bae90a4e798bb20a1953508e688ef15a8bfd1b5a49cdcdc151 not found: ID does not exist"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.954306 4768 scope.go:117] "RemoveContainer" containerID="16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add"
Feb 17 13:58:55 crc kubenswrapper[4768]: E0217 13:58:55.966233 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add\": container with ID starting with 16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add not found: ID does not exist" containerID="16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add"
Feb 17 13:58:55 crc kubenswrapper[4768]: I0217 13:58:55.966832 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add"} err="failed to get container status \"16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add\": rpc error: code = NotFound desc = could not find container \"16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add\": container with ID starting with 16e2b6d9fb217b4e0023a2a85ee90118aa1eb1227cce38b0a37ae2b1cbcf7add not found: ID does not exist"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080201 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080321 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080370 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080450 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcd52\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-kube-api-access-fcd52\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080500 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080563 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080633 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080656 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080701 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080730 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.080812 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.182853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.182932 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.182992 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcd52\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-kube-api-access-fcd52\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183015 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183047 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183078 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183120 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183142 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183165 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183204 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183264 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.183511 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.184221 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.184411 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.184441 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.184720 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.185128 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.192034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.192658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.196446 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.199205 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.207249 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcd52\" (UniqueName: \"kubernetes.io/projected/edccbc8c-a38a-4c5d-b31a-a3b55f182ffa-kube-api-access-fcd52\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.219530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.304800 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.750375 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 13:58:56 crc kubenswrapper[4768]: W0217 13:58:56.755648 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedccbc8c_a38a_4c5d_b31a_a3b55f182ffa.slice/crio-fca18ad8e46624d084291d3ba8fdf8676fbced5ceebaf8f10d0287c587f3fa5e WatchSource:0}: Error finding container fca18ad8e46624d084291d3ba8fdf8676fbced5ceebaf8f10d0287c587f3fa5e: Status 404 returned error can't find the container with id fca18ad8e46624d084291d3ba8fdf8676fbced5ceebaf8f10d0287c587f3fa5e
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.820493 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa","Type":"ContainerStarted","Data":"fca18ad8e46624d084291d3ba8fdf8676fbced5ceebaf8f10d0287c587f3fa5e"}
Feb 17 13:58:56 crc kubenswrapper[4768]: I0217 13:58:56.821686 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78830acd-378f-4199-8615-9884cdca4154","Type":"ContainerStarted","Data":"2936172c4a4a8df99c13d913fc8313176fbc178c0da6e945de144a4c842f6436"}
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.426996 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"]
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.429634 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.431462 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.445030 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"]
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.556633 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5df4be-f003-429d-8a84-81a239db88c0" path="/var/lib/kubelet/pods/7d5df4be-f003-429d-8a84-81a239db88c0/volumes"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.609622 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnwwk\" (UniqueName: \"kubernetes.io/projected/809a5161-aafa-4472-8bc9-1d251b55e815-kube-api-access-jnwwk\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.609817 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.609894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-config\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.610143 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.610311 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.610382 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.610418 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.713116 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.713208 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.713279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.713305 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.714186 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.714223 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.714550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.714905 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.715887 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnwwk\" (UniqueName: \"kubernetes.io/projected/809a5161-aafa-4472-8bc9-1d251b55e815-kube-api-access-jnwwk\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.716904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.716994 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-config\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.717137 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.717642 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-config\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.737350 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnwwk\" (UniqueName: \"kubernetes.io/projected/809a5161-aafa-4472-8bc9-1d251b55e815-kube-api-access-jnwwk\") pod \"dnsmasq-dns-79bd4cc8c9-ckmnk\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.748615 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:57 crc kubenswrapper[4768]: I0217 13:58:57.830993 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78830acd-378f-4199-8615-9884cdca4154","Type":"ContainerStarted","Data":"6b02cd8f6176361481a32a9e9035f2e9c3ff4e7f1d33d836b411f888989d4c55"}
Feb 17 13:58:58 crc kubenswrapper[4768]: W0217 13:58:58.226720 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod809a5161_aafa_4472_8bc9_1d251b55e815.slice/crio-ee6e0f2c12e6b51149f1e59ef1518dd76db380bc7230f000daf350f62f0b83e0 WatchSource:0}: Error finding container ee6e0f2c12e6b51149f1e59ef1518dd76db380bc7230f000daf350f62f0b83e0: Status 404 returned error can't find the container with id ee6e0f2c12e6b51149f1e59ef1518dd76db380bc7230f000daf350f62f0b83e0
Feb 17 13:58:58 crc kubenswrapper[4768]: I0217 13:58:58.232392 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"]
Feb 17 13:58:58 crc kubenswrapper[4768]: I0217 13:58:58.844079 4768 generic.go:334] "Generic (PLEG): container finished" podID="809a5161-aafa-4472-8bc9-1d251b55e815" containerID="2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1" exitCode=0
Feb 17 13:58:58 crc kubenswrapper[4768]: I0217 13:58:58.845186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" event={"ID":"809a5161-aafa-4472-8bc9-1d251b55e815","Type":"ContainerDied","Data":"2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1"}
Feb 17 13:58:58 crc kubenswrapper[4768]: I0217 13:58:58.845217 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" event={"ID":"809a5161-aafa-4472-8bc9-1d251b55e815","Type":"ContainerStarted","Data":"ee6e0f2c12e6b51149f1e59ef1518dd76db380bc7230f000daf350f62f0b83e0"}
Feb 17 13:58:58 crc kubenswrapper[4768]: I0217 13:58:58.846889 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa","Type":"ContainerStarted","Data":"d3001ec0a5e84b1ccc2980606006ed1931cda8897382a488472aa679b9cde307"}
Feb 17 13:58:59 crc kubenswrapper[4768]: I0217 13:58:59.863163 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" event={"ID":"809a5161-aafa-4472-8bc9-1d251b55e815","Type":"ContainerStarted","Data":"7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7"}
Feb 17 13:58:59 crc kubenswrapper[4768]: I0217 13:58:59.863988 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:58:59 crc kubenswrapper[4768]: I0217 13:58:59.888481 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" podStartSLOduration=2.888451841 podStartE2EDuration="2.888451841s" podCreationTimestamp="2026-02-17 13:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:58:59.883646672 +0000 UTC m=+1359.163033124" watchObservedRunningTime="2026-02-17 13:58:59.888451841 +0000 UTC m=+1359.167838273"
Feb 17 13:59:07 crc kubenswrapper[4768]: I0217 13:59:07.751311 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"
Feb 17 13:59:07 crc kubenswrapper[4768]: I0217 13:59:07.845339 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmnn5"]
Feb 17 13:59:07 crc kubenswrapper[4768]: I0217 13:59:07.845978 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" podUID="07e49b07-273e-48ae-8c45-c523632d87fe" containerName="dnsmasq-dns" containerID="cri-o://c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710" gracePeriod=10
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.027198 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-rbrnb"]
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.029122 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.040817 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-rbrnb"]
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.117683 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-config\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.117729 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.117769 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-dns-svc\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.117956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.118117 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.118188 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.118338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8fz2\" (UniqueName: \"kubernetes.io/projected/22064d12-d9c4-45c2-927e-77ce03c906bb-kube-api-access-c8fz2\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.220248 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb"
Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.220306 4768 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.220340 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8fz2\" (UniqueName: \"kubernetes.io/projected/22064d12-d9c4-45c2-927e-77ce03c906bb-kube-api-access-c8fz2\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.220399 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-config\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.220417 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.220445 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-dns-svc\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.220482 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.221282 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.221841 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.222424 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.223414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-config\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.224085 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-dns-swift-storage-0\") pod 
\"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.224661 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22064d12-d9c4-45c2-927e-77ce03c906bb-dns-svc\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.266568 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8fz2\" (UniqueName: \"kubernetes.io/projected/22064d12-d9c4-45c2-927e-77ce03c906bb-kube-api-access-c8fz2\") pod \"dnsmasq-dns-55478c4467-rbrnb\" (UID: \"22064d12-d9c4-45c2-927e-77ce03c906bb\") " pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.359780 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.368896 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.525364 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-sb\") pod \"07e49b07-273e-48ae-8c45-c523632d87fe\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.525427 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-config\") pod \"07e49b07-273e-48ae-8c45-c523632d87fe\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.525531 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-svc\") pod \"07e49b07-273e-48ae-8c45-c523632d87fe\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.525590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-swift-storage-0\") pod \"07e49b07-273e-48ae-8c45-c523632d87fe\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.525692 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn2jc\" (UniqueName: \"kubernetes.io/projected/07e49b07-273e-48ae-8c45-c523632d87fe-kube-api-access-sn2jc\") pod \"07e49b07-273e-48ae-8c45-c523632d87fe\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.525720 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-nb\") pod \"07e49b07-273e-48ae-8c45-c523632d87fe\" (UID: \"07e49b07-273e-48ae-8c45-c523632d87fe\") " Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.547338 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e49b07-273e-48ae-8c45-c523632d87fe-kube-api-access-sn2jc" (OuterVolumeSpecName: "kube-api-access-sn2jc") pod "07e49b07-273e-48ae-8c45-c523632d87fe" (UID: "07e49b07-273e-48ae-8c45-c523632d87fe"). InnerVolumeSpecName "kube-api-access-sn2jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.597163 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07e49b07-273e-48ae-8c45-c523632d87fe" (UID: "07e49b07-273e-48ae-8c45-c523632d87fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.597838 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07e49b07-273e-48ae-8c45-c523632d87fe" (UID: "07e49b07-273e-48ae-8c45-c523632d87fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.603021 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "07e49b07-273e-48ae-8c45-c523632d87fe" (UID: "07e49b07-273e-48ae-8c45-c523632d87fe"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.603457 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "07e49b07-273e-48ae-8c45-c523632d87fe" (UID: "07e49b07-273e-48ae-8c45-c523632d87fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.624259 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-config" (OuterVolumeSpecName: "config") pod "07e49b07-273e-48ae-8c45-c523632d87fe" (UID: "07e49b07-273e-48ae-8c45-c523632d87fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.627974 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn2jc\" (UniqueName: \"kubernetes.io/projected/07e49b07-273e-48ae-8c45-c523632d87fe-kube-api-access-sn2jc\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.627998 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.628007 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.628015 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-config\") on node \"crc\" DevicePath \"\"" Feb 17 
13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.628026 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.628036 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07e49b07-273e-48ae-8c45-c523632d87fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:08 crc kubenswrapper[4768]: W0217 13:59:08.814812 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22064d12_d9c4_45c2_927e_77ce03c906bb.slice/crio-425f070b05fea6f0372d69f39337c5c2644acf8323d1d16ec566b84b569029c0 WatchSource:0}: Error finding container 425f070b05fea6f0372d69f39337c5c2644acf8323d1d16ec566b84b569029c0: Status 404 returned error can't find the container with id 425f070b05fea6f0372d69f39337c5c2644acf8323d1d16ec566b84b569029c0 Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.815752 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-rbrnb"] Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.949010 4768 generic.go:334] "Generic (PLEG): container finished" podID="07e49b07-273e-48ae-8c45-c523632d87fe" containerID="c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710" exitCode=0 Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.949097 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" event={"ID":"07e49b07-273e-48ae-8c45-c523632d87fe","Type":"ContainerDied","Data":"c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710"} Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.949120 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.949156 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmnn5" event={"ID":"07e49b07-273e-48ae-8c45-c523632d87fe","Type":"ContainerDied","Data":"836ef9b8b4f0f35fd118f369b2222738da3a57c0b92f27ba0e237ff2db752847"} Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.949183 4768 scope.go:117] "RemoveContainer" containerID="c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710" Feb 17 13:59:08 crc kubenswrapper[4768]: I0217 13:59:08.951064 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-rbrnb" event={"ID":"22064d12-d9c4-45c2-927e-77ce03c906bb","Type":"ContainerStarted","Data":"425f070b05fea6f0372d69f39337c5c2644acf8323d1d16ec566b84b569029c0"} Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.032476 4768 scope.go:117] "RemoveContainer" containerID="191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5" Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.103195 4768 scope.go:117] "RemoveContainer" containerID="c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710" Feb 17 13:59:09 crc kubenswrapper[4768]: E0217 13:59:09.103588 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710\": container with ID starting with c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710 not found: ID does not exist" containerID="c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710" Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.103627 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710"} err="failed to get container status 
\"c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710\": rpc error: code = NotFound desc = could not find container \"c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710\": container with ID starting with c7a4e728edc44ba31a85cda8b55e994cd92607634f0ab3e64fc925a009789710 not found: ID does not exist" Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.103652 4768 scope.go:117] "RemoveContainer" containerID="191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5" Feb 17 13:59:09 crc kubenswrapper[4768]: E0217 13:59:09.104014 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5\": container with ID starting with 191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5 not found: ID does not exist" containerID="191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5" Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.104035 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5"} err="failed to get container status \"191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5\": rpc error: code = NotFound desc = could not find container \"191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5\": container with ID starting with 191a0da1b91a29d7334b6e4503f20c8013eaef02f143f61e5fcaaeb4123524e5 not found: ID does not exist" Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.165566 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmnn5"] Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.175074 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmnn5"] Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.546273 4768 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="07e49b07-273e-48ae-8c45-c523632d87fe" path="/var/lib/kubelet/pods/07e49b07-273e-48ae-8c45-c523632d87fe/volumes" Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.963609 4768 generic.go:334] "Generic (PLEG): container finished" podID="22064d12-d9c4-45c2-927e-77ce03c906bb" containerID="c52a5c02504a2a40ed0cf621c0d255ccc85e69b374c1f1d2936c5d3b436e37c4" exitCode=0 Feb 17 13:59:09 crc kubenswrapper[4768]: I0217 13:59:09.963652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-rbrnb" event={"ID":"22064d12-d9c4-45c2-927e-77ce03c906bb","Type":"ContainerDied","Data":"c52a5c02504a2a40ed0cf621c0d255ccc85e69b374c1f1d2936c5d3b436e37c4"} Feb 17 13:59:10 crc kubenswrapper[4768]: I0217 13:59:10.974036 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-rbrnb" event={"ID":"22064d12-d9c4-45c2-927e-77ce03c906bb","Type":"ContainerStarted","Data":"a293142e139f7db98a3bf01fdaa4d6e125ea8c5bad16778af7533d5538b9279e"} Feb 17 13:59:10 crc kubenswrapper[4768]: I0217 13:59:10.974495 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:11 crc kubenswrapper[4768]: I0217 13:59:11.005886 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-rbrnb" podStartSLOduration=4.005863058 podStartE2EDuration="4.005863058s" podCreationTimestamp="2026-02-17 13:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:59:10.999981769 +0000 UTC m=+1370.279368251" watchObservedRunningTime="2026-02-17 13:59:11.005863058 +0000 UTC m=+1370.285249510" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.134614 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8gcsc"] Feb 17 13:59:13 crc kubenswrapper[4768]: E0217 
13:59:13.135456 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e49b07-273e-48ae-8c45-c523632d87fe" containerName="init" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.135472 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e49b07-273e-48ae-8c45-c523632d87fe" containerName="init" Feb 17 13:59:13 crc kubenswrapper[4768]: E0217 13:59:13.135497 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e49b07-273e-48ae-8c45-c523632d87fe" containerName="dnsmasq-dns" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.135505 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e49b07-273e-48ae-8c45-c523632d87fe" containerName="dnsmasq-dns" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.135755 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e49b07-273e-48ae-8c45-c523632d87fe" containerName="dnsmasq-dns" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.137488 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.146915 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gcsc"] Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.236610 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65zld\" (UniqueName: \"kubernetes.io/projected/9250eb8a-2239-4d0f-8290-040a4c37bbea-kube-api-access-65zld\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.236655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-catalog-content\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.236685 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-utilities\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.338480 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65zld\" (UniqueName: \"kubernetes.io/projected/9250eb8a-2239-4d0f-8290-040a4c37bbea-kube-api-access-65zld\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.338572 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-catalog-content\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.338617 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-utilities\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.339209 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-catalog-content\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.339246 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-utilities\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.357612 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65zld\" (UniqueName: \"kubernetes.io/projected/9250eb8a-2239-4d0f-8290-040a4c37bbea-kube-api-access-65zld\") pod \"redhat-operators-8gcsc\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.463440 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:13 crc kubenswrapper[4768]: I0217 13:59:13.913853 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gcsc"] Feb 17 13:59:14 crc kubenswrapper[4768]: I0217 13:59:14.006264 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gcsc" event={"ID":"9250eb8a-2239-4d0f-8290-040a4c37bbea","Type":"ContainerStarted","Data":"79f0ab925b9ebf2c38af0a5c5b131e3fd61f93df4bd1f936efae564f0434ac1e"} Feb 17 13:59:15 crc kubenswrapper[4768]: I0217 13:59:15.019177 4768 generic.go:334] "Generic (PLEG): container finished" podID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerID="71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea" exitCode=0 Feb 17 13:59:15 crc kubenswrapper[4768]: I0217 13:59:15.019347 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gcsc" event={"ID":"9250eb8a-2239-4d0f-8290-040a4c37bbea","Type":"ContainerDied","Data":"71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea"} Feb 17 13:59:15 crc kubenswrapper[4768]: I0217 13:59:15.021407 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 13:59:16 crc kubenswrapper[4768]: I0217 13:59:16.030058 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gcsc" event={"ID":"9250eb8a-2239-4d0f-8290-040a4c37bbea","Type":"ContainerStarted","Data":"863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407"} Feb 17 13:59:17 crc kubenswrapper[4768]: I0217 13:59:17.041124 4768 generic.go:334] "Generic (PLEG): container finished" podID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerID="863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407" exitCode=0 Feb 17 13:59:17 crc kubenswrapper[4768]: I0217 13:59:17.041210 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8gcsc" event={"ID":"9250eb8a-2239-4d0f-8290-040a4c37bbea","Type":"ContainerDied","Data":"863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407"} Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.052868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gcsc" event={"ID":"9250eb8a-2239-4d0f-8290-040a4c37bbea","Type":"ContainerStarted","Data":"6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07"} Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.091439 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8gcsc" podStartSLOduration=2.707366528 podStartE2EDuration="5.091416809s" podCreationTimestamp="2026-02-17 13:59:13 +0000 UTC" firstStartedPulling="2026-02-17 13:59:15.020902393 +0000 UTC m=+1374.300288865" lastFinishedPulling="2026-02-17 13:59:17.404952704 +0000 UTC m=+1376.684339146" observedRunningTime="2026-02-17 13:59:18.080500133 +0000 UTC m=+1377.359886645" watchObservedRunningTime="2026-02-17 13:59:18.091416809 +0000 UTC m=+1377.370803271" Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.361249 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-rbrnb" Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.446006 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"] Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.446405 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" podUID="809a5161-aafa-4472-8bc9-1d251b55e815" containerName="dnsmasq-dns" containerID="cri-o://7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7" gracePeriod=10 Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.947259 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.972416 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnwwk\" (UniqueName: \"kubernetes.io/projected/809a5161-aafa-4472-8bc9-1d251b55e815-kube-api-access-jnwwk\") pod \"809a5161-aafa-4472-8bc9-1d251b55e815\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.972516 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-swift-storage-0\") pod \"809a5161-aafa-4472-8bc9-1d251b55e815\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.972561 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-config\") pod \"809a5161-aafa-4472-8bc9-1d251b55e815\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.972582 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-nb\") pod \"809a5161-aafa-4472-8bc9-1d251b55e815\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.972661 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-svc\") pod \"809a5161-aafa-4472-8bc9-1d251b55e815\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.972684 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-openstack-edpm-ipam\") pod \"809a5161-aafa-4472-8bc9-1d251b55e815\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " Feb 17 13:59:18 crc kubenswrapper[4768]: I0217 13:59:18.972712 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-sb\") pod \"809a5161-aafa-4472-8bc9-1d251b55e815\" (UID: \"809a5161-aafa-4472-8bc9-1d251b55e815\") " Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.017887 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809a5161-aafa-4472-8bc9-1d251b55e815-kube-api-access-jnwwk" (OuterVolumeSpecName: "kube-api-access-jnwwk") pod "809a5161-aafa-4472-8bc9-1d251b55e815" (UID: "809a5161-aafa-4472-8bc9-1d251b55e815"). InnerVolumeSpecName "kube-api-access-jnwwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.046025 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-config" (OuterVolumeSpecName: "config") pod "809a5161-aafa-4472-8bc9-1d251b55e815" (UID: "809a5161-aafa-4472-8bc9-1d251b55e815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.062473 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "809a5161-aafa-4472-8bc9-1d251b55e815" (UID: "809a5161-aafa-4472-8bc9-1d251b55e815"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.065037 4768 generic.go:334] "Generic (PLEG): container finished" podID="809a5161-aafa-4472-8bc9-1d251b55e815" containerID="7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7" exitCode=0 Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.065872 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "809a5161-aafa-4472-8bc9-1d251b55e815" (UID: "809a5161-aafa-4472-8bc9-1d251b55e815"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.066042 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.066505 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "809a5161-aafa-4472-8bc9-1d251b55e815" (UID: "809a5161-aafa-4472-8bc9-1d251b55e815"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.066581 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" event={"ID":"809a5161-aafa-4472-8bc9-1d251b55e815","Type":"ContainerDied","Data":"7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7"} Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.066629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ckmnk" event={"ID":"809a5161-aafa-4472-8bc9-1d251b55e815","Type":"ContainerDied","Data":"ee6e0f2c12e6b51149f1e59ef1518dd76db380bc7230f000daf350f62f0b83e0"} Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.066648 4768 scope.go:117] "RemoveContainer" containerID="7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.068299 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "809a5161-aafa-4472-8bc9-1d251b55e815" (UID: "809a5161-aafa-4472-8bc9-1d251b55e815"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.073584 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "809a5161-aafa-4472-8bc9-1d251b55e815" (UID: "809a5161-aafa-4472-8bc9-1d251b55e815"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.074813 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.074834 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.074845 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.074854 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnwwk\" (UniqueName: \"kubernetes.io/projected/809a5161-aafa-4472-8bc9-1d251b55e815-kube-api-access-jnwwk\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.074862 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.074872 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-config\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.074880 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809a5161-aafa-4472-8bc9-1d251b55e815-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.130475 
4768 scope.go:117] "RemoveContainer" containerID="2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.161008 4768 scope.go:117] "RemoveContainer" containerID="7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7" Feb 17 13:59:19 crc kubenswrapper[4768]: E0217 13:59:19.162355 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7\": container with ID starting with 7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7 not found: ID does not exist" containerID="7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.162414 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7"} err="failed to get container status \"7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7\": rpc error: code = NotFound desc = could not find container \"7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7\": container with ID starting with 7c153f79f7e09f2aca1d101d2dc141e67f5de4c29170a0918282ccc0befcfce7 not found: ID does not exist" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.162448 4768 scope.go:117] "RemoveContainer" containerID="2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1" Feb 17 13:59:19 crc kubenswrapper[4768]: E0217 13:59:19.164863 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1\": container with ID starting with 2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1 not found: ID does not exist" containerID="2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1" 
Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.164902 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1"} err="failed to get container status \"2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1\": rpc error: code = NotFound desc = could not find container \"2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1\": container with ID starting with 2647d1f694e6bdc5a47e2760157a205cb359b53a627733dc007d6201106744c1 not found: ID does not exist" Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.415971 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"] Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.426145 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ckmnk"] Feb 17 13:59:19 crc kubenswrapper[4768]: I0217 13:59:19.545783 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809a5161-aafa-4472-8bc9-1d251b55e815" path="/var/lib/kubelet/pods/809a5161-aafa-4472-8bc9-1d251b55e815/volumes" Feb 17 13:59:23 crc kubenswrapper[4768]: I0217 13:59:23.463617 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:23 crc kubenswrapper[4768]: I0217 13:59:23.463940 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:23 crc kubenswrapper[4768]: I0217 13:59:23.527988 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:24 crc kubenswrapper[4768]: I0217 13:59:24.173264 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:24 crc kubenswrapper[4768]: I0217 13:59:24.233476 4768 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gcsc"] Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.133319 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8gcsc" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="registry-server" containerID="cri-o://6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07" gracePeriod=2 Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.578549 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.630702 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65zld\" (UniqueName: \"kubernetes.io/projected/9250eb8a-2239-4d0f-8290-040a4c37bbea-kube-api-access-65zld\") pod \"9250eb8a-2239-4d0f-8290-040a4c37bbea\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.630767 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-catalog-content\") pod \"9250eb8a-2239-4d0f-8290-040a4c37bbea\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.630868 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-utilities\") pod \"9250eb8a-2239-4d0f-8290-040a4c37bbea\" (UID: \"9250eb8a-2239-4d0f-8290-040a4c37bbea\") " Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.631816 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-utilities" (OuterVolumeSpecName: "utilities") pod 
"9250eb8a-2239-4d0f-8290-040a4c37bbea" (UID: "9250eb8a-2239-4d0f-8290-040a4c37bbea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.637634 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9250eb8a-2239-4d0f-8290-040a4c37bbea-kube-api-access-65zld" (OuterVolumeSpecName: "kube-api-access-65zld") pod "9250eb8a-2239-4d0f-8290-040a4c37bbea" (UID: "9250eb8a-2239-4d0f-8290-040a4c37bbea"). InnerVolumeSpecName "kube-api-access-65zld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.733229 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65zld\" (UniqueName: \"kubernetes.io/projected/9250eb8a-2239-4d0f-8290-040a4c37bbea-kube-api-access-65zld\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.733255 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.758325 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9250eb8a-2239-4d0f-8290-040a4c37bbea" (UID: "9250eb8a-2239-4d0f-8290-040a4c37bbea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 13:59:26 crc kubenswrapper[4768]: I0217 13:59:26.834385 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250eb8a-2239-4d0f-8290-040a4c37bbea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.143517 4768 generic.go:334] "Generic (PLEG): container finished" podID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerID="6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07" exitCode=0 Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.143561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gcsc" event={"ID":"9250eb8a-2239-4d0f-8290-040a4c37bbea","Type":"ContainerDied","Data":"6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07"} Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.143625 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gcsc" event={"ID":"9250eb8a-2239-4d0f-8290-040a4c37bbea","Type":"ContainerDied","Data":"79f0ab925b9ebf2c38af0a5c5b131e3fd61f93df4bd1f936efae564f0434ac1e"} Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.143652 4768 scope.go:117] "RemoveContainer" containerID="6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.143666 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gcsc" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.168567 4768 scope.go:117] "RemoveContainer" containerID="863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.184299 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gcsc"] Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.190357 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8gcsc"] Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.214551 4768 scope.go:117] "RemoveContainer" containerID="71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.258736 4768 scope.go:117] "RemoveContainer" containerID="6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07" Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.259449 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07\": container with ID starting with 6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07 not found: ID does not exist" containerID="6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.259503 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07"} err="failed to get container status \"6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07\": rpc error: code = NotFound desc = could not find container \"6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07\": container with ID starting with 6fde12fa78545f1888cdec8c0a4d5c24057315d10226d96d0c115d5717767c07 not found: ID does 
not exist" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.259525 4768 scope.go:117] "RemoveContainer" containerID="863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407" Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.259821 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407\": container with ID starting with 863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407 not found: ID does not exist" containerID="863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.259876 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407"} err="failed to get container status \"863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407\": rpc error: code = NotFound desc = could not find container \"863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407\": container with ID starting with 863e7dbda47ed92cc910c0003ccdf83f3e9252e54dd3a196734c0014c622d407 not found: ID does not exist" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.259910 4768 scope.go:117] "RemoveContainer" containerID="71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea" Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.260290 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea\": container with ID starting with 71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea not found: ID does not exist" containerID="71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.260323 4768 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea"} err="failed to get container status \"71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea\": rpc error: code = NotFound desc = could not find container \"71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea\": container with ID starting with 71f5d13ecd19c1a83bc0aea2fdc9edaada6c667eba2eca612854a7ce8a95dcea not found: ID does not exist" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.290224 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s"] Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.290690 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="extract-content" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.290706 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="extract-content" Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.290728 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="registry-server" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.290738 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="registry-server" Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.290750 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809a5161-aafa-4472-8bc9-1d251b55e815" containerName="dnsmasq-dns" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.290759 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="809a5161-aafa-4472-8bc9-1d251b55e815" containerName="dnsmasq-dns" Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.290808 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="extract-utilities" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.290816 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="extract-utilities" Feb 17 13:59:27 crc kubenswrapper[4768]: E0217 13:59:27.290834 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809a5161-aafa-4472-8bc9-1d251b55e815" containerName="init" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.290843 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="809a5161-aafa-4472-8bc9-1d251b55e815" containerName="init" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.291084 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" containerName="registry-server" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.291096 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="809a5161-aafa-4472-8bc9-1d251b55e815" containerName="dnsmasq-dns" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.292012 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.297990 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.298333 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.298491 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.298630 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.314079 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s"] Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.343427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.343567 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc 
kubenswrapper[4768]: I0217 13:59:27.343616 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cslsd\" (UniqueName: \"kubernetes.io/projected/c99d698a-1af3-46d2-97c5-0c33573adaca-kube-api-access-cslsd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.343663 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.446452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.446659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.446721 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cslsd\" (UniqueName: 
\"kubernetes.io/projected/c99d698a-1af3-46d2-97c5-0c33573adaca-kube-api-access-cslsd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.446788 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.451590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.451626 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.453076 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.464619 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cslsd\" (UniqueName: \"kubernetes.io/projected/c99d698a-1af3-46d2-97c5-0c33573adaca-kube-api-access-cslsd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.546535 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9250eb8a-2239-4d0f-8290-040a4c37bbea" path="/var/lib/kubelet/pods/9250eb8a-2239-4d0f-8290-040a4c37bbea/volumes" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.621440 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:27 crc kubenswrapper[4768]: I0217 13:59:27.941964 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s"] Feb 17 13:59:28 crc kubenswrapper[4768]: I0217 13:59:28.155510 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" event={"ID":"c99d698a-1af3-46d2-97c5-0c33573adaca","Type":"ContainerStarted","Data":"c057e51f2494bd2ecdcc4cf8a07e3a49755c8f11d2581f68f8431b6544fe5806"} Feb 17 13:59:30 crc kubenswrapper[4768]: I0217 13:59:30.177510 4768 generic.go:334] "Generic (PLEG): container finished" podID="78830acd-378f-4199-8615-9884cdca4154" containerID="6b02cd8f6176361481a32a9e9035f2e9c3ff4e7f1d33d836b411f888989d4c55" exitCode=0 Feb 17 13:59:30 crc kubenswrapper[4768]: I0217 13:59:30.177635 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"78830acd-378f-4199-8615-9884cdca4154","Type":"ContainerDied","Data":"6b02cd8f6176361481a32a9e9035f2e9c3ff4e7f1d33d836b411f888989d4c55"} Feb 17 13:59:31 crc kubenswrapper[4768]: I0217 13:59:31.191517 4768 generic.go:334] "Generic (PLEG): container finished" podID="edccbc8c-a38a-4c5d-b31a-a3b55f182ffa" containerID="d3001ec0a5e84b1ccc2980606006ed1931cda8897382a488472aa679b9cde307" exitCode=0 Feb 17 13:59:31 crc kubenswrapper[4768]: I0217 13:59:31.191565 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa","Type":"ContainerDied","Data":"d3001ec0a5e84b1ccc2980606006ed1931cda8897382a488472aa679b9cde307"} Feb 17 13:59:31 crc kubenswrapper[4768]: I0217 13:59:31.195723 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78830acd-378f-4199-8615-9884cdca4154","Type":"ContainerStarted","Data":"59e29376df48bf3e3774f7338b1f4648e2c2d26d79b25f6f02d03e1dba1a796d"} Feb 17 13:59:31 crc kubenswrapper[4768]: I0217 13:59:31.196036 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 13:59:31 crc kubenswrapper[4768]: I0217 13:59:31.258425 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.258397775 podStartE2EDuration="37.258397775s" podCreationTimestamp="2026-02-17 13:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:59:31.241283561 +0000 UTC m=+1390.520670043" watchObservedRunningTime="2026-02-17 13:59:31.258397775 +0000 UTC m=+1390.537784247" Feb 17 13:59:37 crc kubenswrapper[4768]: I0217 13:59:37.263641 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" 
event={"ID":"c99d698a-1af3-46d2-97c5-0c33573adaca","Type":"ContainerStarted","Data":"e7cd2bed1fe08cbe75b916ffcbc0babdeecd6e26b5c3da33c23c952f17b7e1b5"} Feb 17 13:59:37 crc kubenswrapper[4768]: I0217 13:59:37.267150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"edccbc8c-a38a-4c5d-b31a-a3b55f182ffa","Type":"ContainerStarted","Data":"b2a8fc4309bfde5596cf516cab335abb76743588b6d4c489f88c35807e7cf824"} Feb 17 13:59:37 crc kubenswrapper[4768]: I0217 13:59:37.267449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:59:37 crc kubenswrapper[4768]: I0217 13:59:37.282656 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" podStartSLOduration=2.030345175 podStartE2EDuration="10.28263923s" podCreationTimestamp="2026-02-17 13:59:27 +0000 UTC" firstStartedPulling="2026-02-17 13:59:27.94356932 +0000 UTC m=+1387.222955762" lastFinishedPulling="2026-02-17 13:59:36.195863375 +0000 UTC m=+1395.475249817" observedRunningTime="2026-02-17 13:59:37.28041917 +0000 UTC m=+1396.559805612" watchObservedRunningTime="2026-02-17 13:59:37.28263923 +0000 UTC m=+1396.562025672" Feb 17 13:59:37 crc kubenswrapper[4768]: I0217 13:59:37.325155 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.325110812 podStartE2EDuration="42.325110812s" podCreationTimestamp="2026-02-17 13:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 13:59:37.315756279 +0000 UTC m=+1396.595142731" watchObservedRunningTime="2026-02-17 13:59:37.325110812 +0000 UTC m=+1396.604497264" Feb 17 13:59:45 crc kubenswrapper[4768]: I0217 13:59:45.360589 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Feb 17 13:59:46 crc kubenswrapper[4768]: I0217 13:59:46.309309 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 13:59:47 crc kubenswrapper[4768]: I0217 13:59:47.364680 4768 generic.go:334] "Generic (PLEG): container finished" podID="c99d698a-1af3-46d2-97c5-0c33573adaca" containerID="e7cd2bed1fe08cbe75b916ffcbc0babdeecd6e26b5c3da33c23c952f17b7e1b5" exitCode=0 Feb 17 13:59:47 crc kubenswrapper[4768]: I0217 13:59:47.364982 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" event={"ID":"c99d698a-1af3-46d2-97c5-0c33573adaca","Type":"ContainerDied","Data":"e7cd2bed1fe08cbe75b916ffcbc0babdeecd6e26b5c3da33c23c952f17b7e1b5"} Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.382014 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" event={"ID":"c99d698a-1af3-46d2-97c5-0c33573adaca","Type":"ContainerDied","Data":"c057e51f2494bd2ecdcc4cf8a07e3a49755c8f11d2581f68f8431b6544fe5806"} Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.382743 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c057e51f2494bd2ecdcc4cf8a07e3a49755c8f11d2581f68f8431b6544fe5806" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.402810 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.569802 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-repo-setup-combined-ca-bundle\") pod \"c99d698a-1af3-46d2-97c5-0c33573adaca\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.570334 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-inventory\") pod \"c99d698a-1af3-46d2-97c5-0c33573adaca\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.570389 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-ssh-key-openstack-edpm-ipam\") pod \"c99d698a-1af3-46d2-97c5-0c33573adaca\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.570541 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cslsd\" (UniqueName: \"kubernetes.io/projected/c99d698a-1af3-46d2-97c5-0c33573adaca-kube-api-access-cslsd\") pod \"c99d698a-1af3-46d2-97c5-0c33573adaca\" (UID: \"c99d698a-1af3-46d2-97c5-0c33573adaca\") " Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.577926 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99d698a-1af3-46d2-97c5-0c33573adaca-kube-api-access-cslsd" (OuterVolumeSpecName: "kube-api-access-cslsd") pod "c99d698a-1af3-46d2-97c5-0c33573adaca" (UID: "c99d698a-1af3-46d2-97c5-0c33573adaca"). InnerVolumeSpecName "kube-api-access-cslsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.579006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c99d698a-1af3-46d2-97c5-0c33573adaca" (UID: "c99d698a-1af3-46d2-97c5-0c33573adaca"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.606284 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-inventory" (OuterVolumeSpecName: "inventory") pod "c99d698a-1af3-46d2-97c5-0c33573adaca" (UID: "c99d698a-1af3-46d2-97c5-0c33573adaca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.628262 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c99d698a-1af3-46d2-97c5-0c33573adaca" (UID: "c99d698a-1af3-46d2-97c5-0c33573adaca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.674067 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.674157 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.674182 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cslsd\" (UniqueName: \"kubernetes.io/projected/c99d698a-1af3-46d2-97c5-0c33573adaca-kube-api-access-cslsd\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:49 crc kubenswrapper[4768]: I0217 13:59:49.674201 4768 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99d698a-1af3-46d2-97c5-0c33573adaca-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.392520 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.610845 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq"] Feb 17 13:59:50 crc kubenswrapper[4768]: E0217 13:59:50.612286 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99d698a-1af3-46d2-97c5-0c33573adaca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.612334 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99d698a-1af3-46d2-97c5-0c33573adaca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.612675 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99d698a-1af3-46d2-97c5-0c33573adaca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.614398 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.616638 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.617021 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.617362 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.617581 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.621966 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq"] Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.798614 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kll9\" (UniqueName: \"kubernetes.io/projected/239d0b98-514d-42e7-8a8c-ac152e3410ed-kube-api-access-2kll9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.798710 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.799132 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.901443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kll9\" (UniqueName: \"kubernetes.io/projected/239d0b98-514d-42e7-8a8c-ac152e3410ed-kube-api-access-2kll9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.901528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.901641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.905958 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: 
\"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.906636 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.917720 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kll9\" (UniqueName: \"kubernetes.io/projected/239d0b98-514d-42e7-8a8c-ac152e3410ed-kube-api-access-2kll9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wsvlq\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:50 crc kubenswrapper[4768]: I0217 13:59:50.948967 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:51 crc kubenswrapper[4768]: I0217 13:59:51.510387 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq"] Feb 17 13:59:51 crc kubenswrapper[4768]: W0217 13:59:51.511816 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod239d0b98_514d_42e7_8a8c_ac152e3410ed.slice/crio-ea00631a6c614f610e566c62ee76a88cb57c457bc07ff21f3ea30dd4cef8929a WatchSource:0}: Error finding container ea00631a6c614f610e566c62ee76a88cb57c457bc07ff21f3ea30dd4cef8929a: Status 404 returned error can't find the container with id ea00631a6c614f610e566c62ee76a88cb57c457bc07ff21f3ea30dd4cef8929a Feb 17 13:59:52 crc kubenswrapper[4768]: I0217 13:59:52.414173 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" event={"ID":"239d0b98-514d-42e7-8a8c-ac152e3410ed","Type":"ContainerStarted","Data":"ea84df5b2e02b48ecc2614b44efda594095a239163001f490b0188172eabcda4"} Feb 17 13:59:52 crc kubenswrapper[4768]: I0217 13:59:52.414500 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" event={"ID":"239d0b98-514d-42e7-8a8c-ac152e3410ed","Type":"ContainerStarted","Data":"ea00631a6c614f610e566c62ee76a88cb57c457bc07ff21f3ea30dd4cef8929a"} Feb 17 13:59:52 crc kubenswrapper[4768]: I0217 13:59:52.437049 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" podStartSLOduration=1.961840423 podStartE2EDuration="2.437029376s" podCreationTimestamp="2026-02-17 13:59:50 +0000 UTC" firstStartedPulling="2026-02-17 13:59:51.515479613 +0000 UTC m=+1410.794866065" lastFinishedPulling="2026-02-17 13:59:51.990668576 +0000 UTC m=+1411.270055018" observedRunningTime="2026-02-17 
13:59:52.430819598 +0000 UTC m=+1411.710206050" watchObservedRunningTime="2026-02-17 13:59:52.437029376 +0000 UTC m=+1411.716415818" Feb 17 13:59:55 crc kubenswrapper[4768]: I0217 13:59:55.446799 4768 generic.go:334] "Generic (PLEG): container finished" podID="239d0b98-514d-42e7-8a8c-ac152e3410ed" containerID="ea84df5b2e02b48ecc2614b44efda594095a239163001f490b0188172eabcda4" exitCode=0 Feb 17 13:59:55 crc kubenswrapper[4768]: I0217 13:59:55.446943 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" event={"ID":"239d0b98-514d-42e7-8a8c-ac152e3410ed","Type":"ContainerDied","Data":"ea84df5b2e02b48ecc2614b44efda594095a239163001f490b0188172eabcda4"} Feb 17 13:59:56 crc kubenswrapper[4768]: I0217 13:59:56.884024 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.027680 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-ssh-key-openstack-edpm-ipam\") pod \"239d0b98-514d-42e7-8a8c-ac152e3410ed\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.027793 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-inventory\") pod \"239d0b98-514d-42e7-8a8c-ac152e3410ed\" (UID: \"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.027842 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kll9\" (UniqueName: \"kubernetes.io/projected/239d0b98-514d-42e7-8a8c-ac152e3410ed-kube-api-access-2kll9\") pod \"239d0b98-514d-42e7-8a8c-ac152e3410ed\" (UID: 
\"239d0b98-514d-42e7-8a8c-ac152e3410ed\") " Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.039024 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239d0b98-514d-42e7-8a8c-ac152e3410ed-kube-api-access-2kll9" (OuterVolumeSpecName: "kube-api-access-2kll9") pod "239d0b98-514d-42e7-8a8c-ac152e3410ed" (UID: "239d0b98-514d-42e7-8a8c-ac152e3410ed"). InnerVolumeSpecName "kube-api-access-2kll9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.052566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "239d0b98-514d-42e7-8a8c-ac152e3410ed" (UID: "239d0b98-514d-42e7-8a8c-ac152e3410ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.053015 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-inventory" (OuterVolumeSpecName: "inventory") pod "239d0b98-514d-42e7-8a8c-ac152e3410ed" (UID: "239d0b98-514d-42e7-8a8c-ac152e3410ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.129871 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.129905 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/239d0b98-514d-42e7-8a8c-ac152e3410ed-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.129917 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kll9\" (UniqueName: \"kubernetes.io/projected/239d0b98-514d-42e7-8a8c-ac152e3410ed-kube-api-access-2kll9\") on node \"crc\" DevicePath \"\"" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.470881 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" event={"ID":"239d0b98-514d-42e7-8a8c-ac152e3410ed","Type":"ContainerDied","Data":"ea00631a6c614f610e566c62ee76a88cb57c457bc07ff21f3ea30dd4cef8929a"} Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.470929 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea00631a6c614f610e566c62ee76a88cb57c457bc07ff21f3ea30dd4cef8929a" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.471097 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wsvlq" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.557637 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr"] Feb 17 13:59:57 crc kubenswrapper[4768]: E0217 13:59:57.558023 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239d0b98-514d-42e7-8a8c-ac152e3410ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.558050 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="239d0b98-514d-42e7-8a8c-ac152e3410ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.558398 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="239d0b98-514d-42e7-8a8c-ac152e3410ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.559186 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.563023 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.563209 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.563426 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.563589 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.572450 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr"] Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.740970 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.741024 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.741069 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.741433 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8n26\" (UniqueName: \"kubernetes.io/projected/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-kube-api-access-g8n26\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.843408 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8n26\" (UniqueName: \"kubernetes.io/projected/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-kube-api-access-g8n26\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.843638 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.843715 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.843805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.851945 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.853058 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.858539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.868458 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8n26\" (UniqueName: \"kubernetes.io/projected/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-kube-api-access-g8n26\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:57 crc kubenswrapper[4768]: I0217 13:59:57.881319 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 13:59:58 crc kubenswrapper[4768]: I0217 13:59:58.455925 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr"] Feb 17 13:59:58 crc kubenswrapper[4768]: W0217 13:59:58.458964 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cf2614c_dbfe_400c_a4ff_a19a96c2f9a0.slice/crio-15ae69a338f22552e6474e27da33a5c3fa7a81475910e528860dc92eab94e819 WatchSource:0}: Error finding container 15ae69a338f22552e6474e27da33a5c3fa7a81475910e528860dc92eab94e819: Status 404 returned error can't find the container with id 15ae69a338f22552e6474e27da33a5c3fa7a81475910e528860dc92eab94e819 Feb 17 13:59:58 crc kubenswrapper[4768]: I0217 13:59:58.491185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" event={"ID":"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0","Type":"ContainerStarted","Data":"15ae69a338f22552e6474e27da33a5c3fa7a81475910e528860dc92eab94e819"} Feb 17 13:59:59 crc kubenswrapper[4768]: I0217 13:59:59.502213 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" 
event={"ID":"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0","Type":"ContainerStarted","Data":"80bffeccdc095e6711d83c2dbfd6007c1d5d29002d2e17165a09cc59273039b7"} Feb 17 13:59:59 crc kubenswrapper[4768]: I0217 13:59:59.524530 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" podStartSLOduration=2.146997885 podStartE2EDuration="2.524510628s" podCreationTimestamp="2026-02-17 13:59:57 +0000 UTC" firstStartedPulling="2026-02-17 13:59:58.462199906 +0000 UTC m=+1417.741586358" lastFinishedPulling="2026-02-17 13:59:58.839712649 +0000 UTC m=+1418.119099101" observedRunningTime="2026-02-17 13:59:59.516147461 +0000 UTC m=+1418.795533903" watchObservedRunningTime="2026-02-17 13:59:59.524510628 +0000 UTC m=+1418.803897070" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.159556 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv"] Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.161063 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.165581 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.169769 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.174003 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv"] Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.286052 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdgd\" (UniqueName: \"kubernetes.io/projected/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-kube-api-access-gxdgd\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.286550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-secret-volume\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.286626 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-config-volume\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.387846 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdgd\" (UniqueName: \"kubernetes.io/projected/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-kube-api-access-gxdgd\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.387930 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-secret-volume\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.388001 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-config-volume\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.389060 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-config-volume\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.394693 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-secret-volume\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.406557 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdgd\" (UniqueName: \"kubernetes.io/projected/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-kube-api-access-gxdgd\") pod \"collect-profiles-29522280-ghhtv\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.487494 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:00 crc kubenswrapper[4768]: I0217 14:00:00.941692 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv"] Feb 17 14:00:00 crc kubenswrapper[4768]: W0217 14:00:00.945019 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5193ed8a_5a4b_4ae8_abf3_161f56ded5d0.slice/crio-9d5e2ec2069758c387b8d20eb6838c3656d6e335f1754ce8ce288a4e71fb8aef WatchSource:0}: Error finding container 9d5e2ec2069758c387b8d20eb6838c3656d6e335f1754ce8ce288a4e71fb8aef: Status 404 returned error can't find the container with id 9d5e2ec2069758c387b8d20eb6838c3656d6e335f1754ce8ce288a4e71fb8aef Feb 17 14:00:01 crc kubenswrapper[4768]: I0217 14:00:01.519937 4768 generic.go:334] "Generic (PLEG): container finished" podID="5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" containerID="cb3065dfe14be7ae452fb785f282f60d238c8cf2b7ae8eec5992bc26406f21d3" exitCode=0 Feb 17 14:00:01 crc kubenswrapper[4768]: I0217 14:00:01.520000 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" event={"ID":"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0","Type":"ContainerDied","Data":"cb3065dfe14be7ae452fb785f282f60d238c8cf2b7ae8eec5992bc26406f21d3"} Feb 17 14:00:01 crc kubenswrapper[4768]: I0217 14:00:01.520346 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" event={"ID":"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0","Type":"ContainerStarted","Data":"9d5e2ec2069758c387b8d20eb6838c3656d6e335f1754ce8ce288a4e71fb8aef"} Feb 17 14:00:02 crc kubenswrapper[4768]: I0217 14:00:02.917613 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.071439 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxdgd\" (UniqueName: \"kubernetes.io/projected/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-kube-api-access-gxdgd\") pod \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.071493 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-secret-volume\") pod \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.071532 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-config-volume\") pod \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\" (UID: \"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0\") " Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.072736 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-config-volume" (OuterVolumeSpecName: "config-volume") pod "5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" (UID: "5193ed8a-5a4b-4ae8-abf3-161f56ded5d0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.077538 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-kube-api-access-gxdgd" (OuterVolumeSpecName: "kube-api-access-gxdgd") pod "5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" (UID: "5193ed8a-5a4b-4ae8-abf3-161f56ded5d0"). InnerVolumeSpecName "kube-api-access-gxdgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.079314 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" (UID: "5193ed8a-5a4b-4ae8-abf3-161f56ded5d0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.175002 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxdgd\" (UniqueName: \"kubernetes.io/projected/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-kube-api-access-gxdgd\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.175058 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.175070 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.544589 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.561594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv" event={"ID":"5193ed8a-5a4b-4ae8-abf3-161f56ded5d0","Type":"ContainerDied","Data":"9d5e2ec2069758c387b8d20eb6838c3656d6e335f1754ce8ce288a4e71fb8aef"} Feb 17 14:00:03 crc kubenswrapper[4768]: I0217 14:00:03.561667 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d5e2ec2069758c387b8d20eb6838c3656d6e335f1754ce8ce288a4e71fb8aef" Feb 17 14:00:27 crc kubenswrapper[4768]: I0217 14:00:27.348310 4768 scope.go:117] "RemoveContainer" containerID="8abc67fc73d38665c484d4c5e1f4ba6f1822919ebbb0332d72519dd155e51f39" Feb 17 14:00:27 crc kubenswrapper[4768]: I0217 14:00:27.381486 4768 scope.go:117] "RemoveContainer" 
containerID="b4194a847faac318eeaa30f98b816cfdd6e63015d5385b43fccd8b85282eacf8" Feb 17 14:00:28 crc kubenswrapper[4768]: I0217 14:00:28.060087 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:00:28 crc kubenswrapper[4768]: I0217 14:00:28.060391 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:00:58 crc kubenswrapper[4768]: I0217 14:00:58.059821 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:00:58 crc kubenswrapper[4768]: I0217 14:00:58.060586 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.168989 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522281-zd786"] Feb 17 14:01:00 crc kubenswrapper[4768]: E0217 14:01:00.169658 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" containerName="collect-profiles" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 
14:01:00.169671 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" containerName="collect-profiles" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.169864 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" containerName="collect-profiles" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.170483 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.180188 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522281-zd786"] Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.271001 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-combined-ca-bundle\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.271203 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpjn\" (UniqueName: \"kubernetes.io/projected/b890b491-00b8-4c5c-9eb9-95f403148371-kube-api-access-xzpjn\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.271367 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-config-data\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.271478 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-fernet-keys\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.373844 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-fernet-keys\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.374147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-combined-ca-bundle\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.374251 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpjn\" (UniqueName: \"kubernetes.io/projected/b890b491-00b8-4c5c-9eb9-95f403148371-kube-api-access-xzpjn\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.374316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-config-data\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.380663 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-combined-ca-bundle\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.384352 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-config-data\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.385161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-fernet-keys\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.394971 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpjn\" (UniqueName: \"kubernetes.io/projected/b890b491-00b8-4c5c-9eb9-95f403148371-kube-api-access-xzpjn\") pod \"keystone-cron-29522281-zd786\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.510300 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:00 crc kubenswrapper[4768]: I0217 14:01:00.992070 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522281-zd786"] Feb 17 14:01:01 crc kubenswrapper[4768]: I0217 14:01:01.144596 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522281-zd786" event={"ID":"b890b491-00b8-4c5c-9eb9-95f403148371","Type":"ContainerStarted","Data":"e93d6d1af70b7e44e1d58545f62c653659f8e9ee3c2dd740b2fa5ae7512b519b"} Feb 17 14:01:02 crc kubenswrapper[4768]: I0217 14:01:02.158937 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522281-zd786" event={"ID":"b890b491-00b8-4c5c-9eb9-95f403148371","Type":"ContainerStarted","Data":"0e7b07d037cb8fc605df34c05f391b93b592a2a523aff0ff2a1a96e538bfd411"} Feb 17 14:01:02 crc kubenswrapper[4768]: I0217 14:01:02.193076 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522281-zd786" podStartSLOduration=2.193053647 podStartE2EDuration="2.193053647s" podCreationTimestamp="2026-02-17 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:01:02.183236991 +0000 UTC m=+1481.462623453" watchObservedRunningTime="2026-02-17 14:01:02.193053647 +0000 UTC m=+1481.472440099" Feb 17 14:01:04 crc kubenswrapper[4768]: I0217 14:01:04.177158 4768 generic.go:334] "Generic (PLEG): container finished" podID="b890b491-00b8-4c5c-9eb9-95f403148371" containerID="0e7b07d037cb8fc605df34c05f391b93b592a2a523aff0ff2a1a96e538bfd411" exitCode=0 Feb 17 14:01:04 crc kubenswrapper[4768]: I0217 14:01:04.177254 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522281-zd786" 
event={"ID":"b890b491-00b8-4c5c-9eb9-95f403148371","Type":"ContainerDied","Data":"0e7b07d037cb8fc605df34c05f391b93b592a2a523aff0ff2a1a96e538bfd411"} Feb 17 14:01:05 crc kubenswrapper[4768]: I0217 14:01:05.541531 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:05 crc kubenswrapper[4768]: I0217 14:01:05.687678 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-fernet-keys\") pod \"b890b491-00b8-4c5c-9eb9-95f403148371\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " Feb 17 14:01:05 crc kubenswrapper[4768]: I0217 14:01:05.688304 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-config-data\") pod \"b890b491-00b8-4c5c-9eb9-95f403148371\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " Feb 17 14:01:05 crc kubenswrapper[4768]: I0217 14:01:05.688412 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzpjn\" (UniqueName: \"kubernetes.io/projected/b890b491-00b8-4c5c-9eb9-95f403148371-kube-api-access-xzpjn\") pod \"b890b491-00b8-4c5c-9eb9-95f403148371\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " Feb 17 14:01:05 crc kubenswrapper[4768]: I0217 14:01:05.688463 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-combined-ca-bundle\") pod \"b890b491-00b8-4c5c-9eb9-95f403148371\" (UID: \"b890b491-00b8-4c5c-9eb9-95f403148371\") " Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.013001 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "b890b491-00b8-4c5c-9eb9-95f403148371" (UID: "b890b491-00b8-4c5c-9eb9-95f403148371"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.019174 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b890b491-00b8-4c5c-9eb9-95f403148371-kube-api-access-xzpjn" (OuterVolumeSpecName: "kube-api-access-xzpjn") pod "b890b491-00b8-4c5c-9eb9-95f403148371" (UID: "b890b491-00b8-4c5c-9eb9-95f403148371"). InnerVolumeSpecName "kube-api-access-xzpjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.027488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522281-zd786" event={"ID":"b890b491-00b8-4c5c-9eb9-95f403148371","Type":"ContainerDied","Data":"e93d6d1af70b7e44e1d58545f62c653659f8e9ee3c2dd740b2fa5ae7512b519b"} Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.028326 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93d6d1af70b7e44e1d58545f62c653659f8e9ee3c2dd740b2fa5ae7512b519b" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.028521 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522281-zd786" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.035365 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b890b491-00b8-4c5c-9eb9-95f403148371" (UID: "b890b491-00b8-4c5c-9eb9-95f403148371"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.084614 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzpjn\" (UniqueName: \"kubernetes.io/projected/b890b491-00b8-4c5c-9eb9-95f403148371-kube-api-access-xzpjn\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.084649 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.084659 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.133198 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-config-data" (OuterVolumeSpecName: "config-data") pod "b890b491-00b8-4c5c-9eb9-95f403148371" (UID: "b890b491-00b8-4c5c-9eb9-95f403148371"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:01:07 crc kubenswrapper[4768]: I0217 14:01:07.185990 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b890b491-00b8-4c5c-9eb9-95f403148371-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.258767 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c5vgj"] Feb 17 14:01:10 crc kubenswrapper[4768]: E0217 14:01:10.259805 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b890b491-00b8-4c5c-9eb9-95f403148371" containerName="keystone-cron" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.259822 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b890b491-00b8-4c5c-9eb9-95f403148371" containerName="keystone-cron" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.260031 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b890b491-00b8-4c5c-9eb9-95f403148371" containerName="keystone-cron" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.261445 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.289310 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5vgj"] Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.351304 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-utilities\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.351414 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-catalog-content\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.351539 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngvq\" (UniqueName: \"kubernetes.io/projected/ddf1a368-b057-4838-9eab-cba6931c623b-kube-api-access-dngvq\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.452909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-catalog-content\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.453017 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dngvq\" (UniqueName: \"kubernetes.io/projected/ddf1a368-b057-4838-9eab-cba6931c623b-kube-api-access-dngvq\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.453183 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-utilities\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.453522 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-catalog-content\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.453743 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-utilities\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.475239 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngvq\" (UniqueName: \"kubernetes.io/projected/ddf1a368-b057-4838-9eab-cba6931c623b-kube-api-access-dngvq\") pod \"certified-operators-c5vgj\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:10 crc kubenswrapper[4768]: I0217 14:01:10.585994 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:11 crc kubenswrapper[4768]: I0217 14:01:11.132267 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5vgj"] Feb 17 14:01:12 crc kubenswrapper[4768]: I0217 14:01:12.081376 4768 generic.go:334] "Generic (PLEG): container finished" podID="ddf1a368-b057-4838-9eab-cba6931c623b" containerID="5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5" exitCode=0 Feb 17 14:01:12 crc kubenswrapper[4768]: I0217 14:01:12.081439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5vgj" event={"ID":"ddf1a368-b057-4838-9eab-cba6931c623b","Type":"ContainerDied","Data":"5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5"} Feb 17 14:01:12 crc kubenswrapper[4768]: I0217 14:01:12.081830 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5vgj" event={"ID":"ddf1a368-b057-4838-9eab-cba6931c623b","Type":"ContainerStarted","Data":"ec764313a118167f983ace585d293fd8d2e6b1faaefc859436f190343197059f"} Feb 17 14:01:13 crc kubenswrapper[4768]: I0217 14:01:13.097573 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5vgj" event={"ID":"ddf1a368-b057-4838-9eab-cba6931c623b","Type":"ContainerStarted","Data":"4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401"} Feb 17 14:01:14 crc kubenswrapper[4768]: I0217 14:01:14.110321 4768 generic.go:334] "Generic (PLEG): container finished" podID="ddf1a368-b057-4838-9eab-cba6931c623b" containerID="4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401" exitCode=0 Feb 17 14:01:14 crc kubenswrapper[4768]: I0217 14:01:14.110485 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5vgj" 
event={"ID":"ddf1a368-b057-4838-9eab-cba6931c623b","Type":"ContainerDied","Data":"4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401"} Feb 17 14:01:15 crc kubenswrapper[4768]: I0217 14:01:15.124738 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5vgj" event={"ID":"ddf1a368-b057-4838-9eab-cba6931c623b","Type":"ContainerStarted","Data":"3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2"} Feb 17 14:01:15 crc kubenswrapper[4768]: I0217 14:01:15.159487 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c5vgj" podStartSLOduration=2.440841441 podStartE2EDuration="5.15946557s" podCreationTimestamp="2026-02-17 14:01:10 +0000 UTC" firstStartedPulling="2026-02-17 14:01:12.086712344 +0000 UTC m=+1491.366098826" lastFinishedPulling="2026-02-17 14:01:14.805336503 +0000 UTC m=+1494.084722955" observedRunningTime="2026-02-17 14:01:15.150678412 +0000 UTC m=+1494.430064884" watchObservedRunningTime="2026-02-17 14:01:15.15946557 +0000 UTC m=+1494.438852022" Feb 17 14:01:20 crc kubenswrapper[4768]: I0217 14:01:20.587251 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:20 crc kubenswrapper[4768]: I0217 14:01:20.587683 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:20 crc kubenswrapper[4768]: I0217 14:01:20.653057 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:21 crc kubenswrapper[4768]: I0217 14:01:21.241263 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:21 crc kubenswrapper[4768]: I0217 14:01:21.300429 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-c5vgj"] Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.203546 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c5vgj" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="registry-server" containerID="cri-o://3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2" gracePeriod=2 Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.681698 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.831202 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngvq\" (UniqueName: \"kubernetes.io/projected/ddf1a368-b057-4838-9eab-cba6931c623b-kube-api-access-dngvq\") pod \"ddf1a368-b057-4838-9eab-cba6931c623b\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.831569 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-utilities\") pod \"ddf1a368-b057-4838-9eab-cba6931c623b\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.831639 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-catalog-content\") pod \"ddf1a368-b057-4838-9eab-cba6931c623b\" (UID: \"ddf1a368-b057-4838-9eab-cba6931c623b\") " Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.832530 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-utilities" (OuterVolumeSpecName: "utilities") pod "ddf1a368-b057-4838-9eab-cba6931c623b" (UID: 
"ddf1a368-b057-4838-9eab-cba6931c623b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.841621 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf1a368-b057-4838-9eab-cba6931c623b-kube-api-access-dngvq" (OuterVolumeSpecName: "kube-api-access-dngvq") pod "ddf1a368-b057-4838-9eab-cba6931c623b" (UID: "ddf1a368-b057-4838-9eab-cba6931c623b"). InnerVolumeSpecName "kube-api-access-dngvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.896254 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddf1a368-b057-4838-9eab-cba6931c623b" (UID: "ddf1a368-b057-4838-9eab-cba6931c623b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.934591 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngvq\" (UniqueName: \"kubernetes.io/projected/ddf1a368-b057-4838-9eab-cba6931c623b-kube-api-access-dngvq\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.934630 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:23 crc kubenswrapper[4768]: I0217 14:01:23.934641 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddf1a368-b057-4838-9eab-cba6931c623b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.216003 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="ddf1a368-b057-4838-9eab-cba6931c623b" containerID="3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2" exitCode=0 Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.216054 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5vgj" event={"ID":"ddf1a368-b057-4838-9eab-cba6931c623b","Type":"ContainerDied","Data":"3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2"} Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.216143 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5vgj" event={"ID":"ddf1a368-b057-4838-9eab-cba6931c623b","Type":"ContainerDied","Data":"ec764313a118167f983ace585d293fd8d2e6b1faaefc859436f190343197059f"} Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.216170 4768 scope.go:117] "RemoveContainer" containerID="3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.216082 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5vgj" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.241669 4768 scope.go:117] "RemoveContainer" containerID="4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.259566 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c5vgj"] Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.265778 4768 scope.go:117] "RemoveContainer" containerID="5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.274061 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c5vgj"] Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.324017 4768 scope.go:117] "RemoveContainer" containerID="3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2" Feb 17 14:01:24 crc kubenswrapper[4768]: E0217 14:01:24.324579 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2\": container with ID starting with 3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2 not found: ID does not exist" containerID="3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.324633 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2"} err="failed to get container status \"3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2\": rpc error: code = NotFound desc = could not find container \"3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2\": container with ID starting with 3d14c03e30bf00220cb61f8a77b79d3a33d21397dbf30538d36a113b76004ed2 not 
found: ID does not exist" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.324664 4768 scope.go:117] "RemoveContainer" containerID="4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401" Feb 17 14:01:24 crc kubenswrapper[4768]: E0217 14:01:24.325001 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401\": container with ID starting with 4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401 not found: ID does not exist" containerID="4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.325028 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401"} err="failed to get container status \"4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401\": rpc error: code = NotFound desc = could not find container \"4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401\": container with ID starting with 4b09079242836482c297588206e4b4ff103ecc89b57a72f2f686228d0a56d401 not found: ID does not exist" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.325045 4768 scope.go:117] "RemoveContainer" containerID="5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5" Feb 17 14:01:24 crc kubenswrapper[4768]: E0217 14:01:24.325470 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5\": container with ID starting with 5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5 not found: ID does not exist" containerID="5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5" Feb 17 14:01:24 crc kubenswrapper[4768]: I0217 14:01:24.325493 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5"} err="failed to get container status \"5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5\": rpc error: code = NotFound desc = could not find container \"5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5\": container with ID starting with 5d917d9758050a058365818325da991f6edbfda5c4b9e71d36b136718ebd8dc5 not found: ID does not exist" Feb 17 14:01:25 crc kubenswrapper[4768]: I0217 14:01:25.546337 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" path="/var/lib/kubelet/pods/ddf1a368-b057-4838-9eab-cba6931c623b/volumes" Feb 17 14:01:27 crc kubenswrapper[4768]: I0217 14:01:27.493367 4768 scope.go:117] "RemoveContainer" containerID="98248cbba561747411a17c66400e90ec2d76ff114d9555c928df29a6b7d4d6a1" Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.060342 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.060412 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.060456 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.061325 4768 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.061441 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" gracePeriod=600 Feb 17 14:01:28 crc kubenswrapper[4768]: E0217 14:01:28.191495 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.256611 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" exitCode=0 Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.256689 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"} Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.256965 4768 scope.go:117] "RemoveContainer" 
containerID="f9b51566c32baca16b7c982a1f5be2bc77d96745c6b89bf249154277d12b15c6" Feb 17 14:01:28 crc kubenswrapper[4768]: I0217 14:01:28.257886 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:01:28 crc kubenswrapper[4768]: E0217 14:01:28.258383 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:01:38 crc kubenswrapper[4768]: I0217 14:01:38.533984 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:01:38 crc kubenswrapper[4768]: E0217 14:01:38.534911 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:01:52 crc kubenswrapper[4768]: I0217 14:01:52.535072 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:01:52 crc kubenswrapper[4768]: E0217 14:01:52.537130 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.394856 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6qwq4"] Feb 17 14:02:01 crc kubenswrapper[4768]: E0217 14:02:01.395872 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="registry-server" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.395890 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="registry-server" Feb 17 14:02:01 crc kubenswrapper[4768]: E0217 14:02:01.395923 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="extract-utilities" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.395936 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="extract-utilities" Feb 17 14:02:01 crc kubenswrapper[4768]: E0217 14:02:01.395950 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="extract-content" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.395959 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="extract-content" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.396185 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf1a368-b057-4838-9eab-cba6931c623b" containerName="registry-server" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.397869 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.405820 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qwq4"] Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.454679 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-catalog-content\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.454755 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwkj\" (UniqueName: \"kubernetes.io/projected/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-kube-api-access-hgwkj\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.455248 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-utilities\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.557348 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-utilities\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.557579 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-catalog-content\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.557685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgwkj\" (UniqueName: \"kubernetes.io/projected/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-kube-api-access-hgwkj\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.557935 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-utilities\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.558212 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-catalog-content\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.578782 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgwkj\" (UniqueName: \"kubernetes.io/projected/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-kube-api-access-hgwkj\") pod \"community-operators-6qwq4\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:01 crc kubenswrapper[4768]: I0217 14:02:01.720524 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:02 crc kubenswrapper[4768]: I0217 14:02:02.184671 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qwq4"] Feb 17 14:02:02 crc kubenswrapper[4768]: I0217 14:02:02.609072 4768 generic.go:334] "Generic (PLEG): container finished" podID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerID="a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36" exitCode=0 Feb 17 14:02:02 crc kubenswrapper[4768]: I0217 14:02:02.609176 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qwq4" event={"ID":"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86","Type":"ContainerDied","Data":"a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36"} Feb 17 14:02:02 crc kubenswrapper[4768]: I0217 14:02:02.609400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qwq4" event={"ID":"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86","Type":"ContainerStarted","Data":"a557a7b3e865ec24147ea887bf380af0da0161bea3f7c97a754a75d34c490a18"} Feb 17 14:02:04 crc kubenswrapper[4768]: I0217 14:02:04.534456 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:02:04 crc kubenswrapper[4768]: E0217 14:02:04.535208 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:02:04 crc kubenswrapper[4768]: I0217 14:02:04.684965 4768 generic.go:334] "Generic (PLEG): container finished" podID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" 
containerID="85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831" exitCode=0 Feb 17 14:02:04 crc kubenswrapper[4768]: I0217 14:02:04.685261 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qwq4" event={"ID":"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86","Type":"ContainerDied","Data":"85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831"} Feb 17 14:02:05 crc kubenswrapper[4768]: I0217 14:02:05.697765 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qwq4" event={"ID":"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86","Type":"ContainerStarted","Data":"6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f"} Feb 17 14:02:05 crc kubenswrapper[4768]: I0217 14:02:05.725474 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6qwq4" podStartSLOduration=2.216231137 podStartE2EDuration="4.725448945s" podCreationTimestamp="2026-02-17 14:02:01 +0000 UTC" firstStartedPulling="2026-02-17 14:02:02.610907964 +0000 UTC m=+1541.890294406" lastFinishedPulling="2026-02-17 14:02:05.120125772 +0000 UTC m=+1544.399512214" observedRunningTime="2026-02-17 14:02:05.714712283 +0000 UTC m=+1544.994098725" watchObservedRunningTime="2026-02-17 14:02:05.725448945 +0000 UTC m=+1545.004835397" Feb 17 14:02:11 crc kubenswrapper[4768]: I0217 14:02:11.721530 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:11 crc kubenswrapper[4768]: I0217 14:02:11.723127 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:11 crc kubenswrapper[4768]: I0217 14:02:11.775389 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:11 crc kubenswrapper[4768]: I0217 
14:02:11.828599 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:12 crc kubenswrapper[4768]: I0217 14:02:12.015340 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qwq4"] Feb 17 14:02:13 crc kubenswrapper[4768]: I0217 14:02:13.795159 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6qwq4" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="registry-server" containerID="cri-o://6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f" gracePeriod=2 Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.230872 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.304233 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-catalog-content\") pod \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.304692 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgwkj\" (UniqueName: \"kubernetes.io/projected/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-kube-api-access-hgwkj\") pod \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.305070 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-utilities\") pod \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\" (UID: \"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86\") " Feb 17 14:02:14 crc kubenswrapper[4768]: 
I0217 14:02:14.307599 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-utilities" (OuterVolumeSpecName: "utilities") pod "ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" (UID: "ca73e35a-72bc-4d94-8a69-ea6bc73ffa86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.315847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-kube-api-access-hgwkj" (OuterVolumeSpecName: "kube-api-access-hgwkj") pod "ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" (UID: "ca73e35a-72bc-4d94-8a69-ea6bc73ffa86"). InnerVolumeSpecName "kube-api-access-hgwkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.407399 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.407437 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgwkj\" (UniqueName: \"kubernetes.io/projected/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-kube-api-access-hgwkj\") on node \"crc\" DevicePath \"\"" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.732446 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" (UID: "ca73e35a-72bc-4d94-8a69-ea6bc73ffa86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.807506 4768 generic.go:334] "Generic (PLEG): container finished" podID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerID="6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f" exitCode=0 Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.807589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qwq4" event={"ID":"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86","Type":"ContainerDied","Data":"6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f"} Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.807641 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qwq4" event={"ID":"ca73e35a-72bc-4d94-8a69-ea6bc73ffa86","Type":"ContainerDied","Data":"a557a7b3e865ec24147ea887bf380af0da0161bea3f7c97a754a75d34c490a18"} Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.807681 4768 scope.go:117] "RemoveContainer" containerID="6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.807903 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6qwq4" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.815188 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.835276 4768 scope.go:117] "RemoveContainer" containerID="85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.849395 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qwq4"] Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.861152 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6qwq4"] Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.875396 4768 scope.go:117] "RemoveContainer" containerID="a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.915346 4768 scope.go:117] "RemoveContainer" containerID="6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f" Feb 17 14:02:14 crc kubenswrapper[4768]: E0217 14:02:14.916352 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f\": container with ID starting with 6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f not found: ID does not exist" containerID="6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.916411 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f"} err="failed to get container status 
\"6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f\": rpc error: code = NotFound desc = could not find container \"6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f\": container with ID starting with 6e8a5f998e923aff9c23004e626ce9628e28ffa92cb83c1c9b0e0b7dbfda870f not found: ID does not exist" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.916442 4768 scope.go:117] "RemoveContainer" containerID="85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831" Feb 17 14:02:14 crc kubenswrapper[4768]: E0217 14:02:14.916822 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831\": container with ID starting with 85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831 not found: ID does not exist" containerID="85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.916861 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831"} err="failed to get container status \"85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831\": rpc error: code = NotFound desc = could not find container \"85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831\": container with ID starting with 85037d045527f0ab8ac3c48c16ed3144c9d0d8c94ede210b9e5a0300e99d6831 not found: ID does not exist" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.916883 4768 scope.go:117] "RemoveContainer" containerID="a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36" Feb 17 14:02:14 crc kubenswrapper[4768]: E0217 14:02:14.917142 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36\": container with ID starting with a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36 not found: ID does not exist" containerID="a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36" Feb 17 14:02:14 crc kubenswrapper[4768]: I0217 14:02:14.917170 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36"} err="failed to get container status \"a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36\": rpc error: code = NotFound desc = could not find container \"a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36\": container with ID starting with a9dcda6eaf91307b45ba43384c1b31482493fd032712c7d7152bd24a3b52ed36 not found: ID does not exist" Feb 17 14:02:15 crc kubenswrapper[4768]: I0217 14:02:15.534943 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:02:15 crc kubenswrapper[4768]: E0217 14:02:15.535411 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:02:15 crc kubenswrapper[4768]: I0217 14:02:15.549737 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" path="/var/lib/kubelet/pods/ca73e35a-72bc-4d94-8a69-ea6bc73ffa86/volumes" Feb 17 14:02:30 crc kubenswrapper[4768]: I0217 14:02:30.533988 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:02:30 crc 
kubenswrapper[4768]: E0217 14:02:30.534709 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:02:45 crc kubenswrapper[4768]: I0217 14:02:45.535442 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:02:45 crc kubenswrapper[4768]: E0217 14:02:45.536625 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:02:54 crc kubenswrapper[4768]: I0217 14:02:54.180825 4768 generic.go:334] "Generic (PLEG): container finished" podID="0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" containerID="80bffeccdc095e6711d83c2dbfd6007c1d5d29002d2e17165a09cc59273039b7" exitCode=0 Feb 17 14:02:54 crc kubenswrapper[4768]: I0217 14:02:54.180994 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" event={"ID":"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0","Type":"ContainerDied","Data":"80bffeccdc095e6711d83c2dbfd6007c1d5d29002d2e17165a09cc59273039b7"} Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.625902 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.754263 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-bootstrap-combined-ca-bundle\") pod \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.754356 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-inventory\") pod \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.754470 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-ssh-key-openstack-edpm-ipam\") pod \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.754540 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8n26\" (UniqueName: \"kubernetes.io/projected/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-kube-api-access-g8n26\") pod \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\" (UID: \"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0\") " Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.759536 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" (UID: "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.765607 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-kube-api-access-g8n26" (OuterVolumeSpecName: "kube-api-access-g8n26") pod "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" (UID: "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0"). InnerVolumeSpecName "kube-api-access-g8n26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.782514 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" (UID: "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.792889 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-inventory" (OuterVolumeSpecName: "inventory") pod "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" (UID: "0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.856764 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.856809 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.856824 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:02:55 crc kubenswrapper[4768]: I0217 14:02:55.856837 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8n26\" (UniqueName: \"kubernetes.io/projected/0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0-kube-api-access-g8n26\") on node \"crc\" DevicePath \"\"" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.205340 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" event={"ID":"0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0","Type":"ContainerDied","Data":"15ae69a338f22552e6474e27da33a5c3fa7a81475910e528860dc92eab94e819"} Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.205863 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ae69a338f22552e6474e27da33a5c3fa7a81475910e528860dc92eab94e819" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.205422 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.301144 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d"] Feb 17 14:02:56 crc kubenswrapper[4768]: E0217 14:02:56.301831 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="extract-content" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.301867 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="extract-content" Feb 17 14:02:56 crc kubenswrapper[4768]: E0217 14:02:56.301892 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.301911 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 14:02:56 crc kubenswrapper[4768]: E0217 14:02:56.301964 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="registry-server" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.301990 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="registry-server" Feb 17 14:02:56 crc kubenswrapper[4768]: E0217 14:02:56.302006 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="extract-utilities" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.302019 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="extract-utilities" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.302349 
4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca73e35a-72bc-4d94-8a69-ea6bc73ffa86" containerName="registry-server" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.302393 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.303231 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.306965 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.307134 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.307263 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.307319 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.309762 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d"] Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.365304 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wd9h\" (UniqueName: \"kubernetes.io/projected/a8163799-ddb2-4876-830f-19da3abc4578-kube-api-access-4wd9h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc 
kubenswrapper[4768]: I0217 14:02:56.365411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.365461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.466937 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wd9h\" (UniqueName: \"kubernetes.io/projected/a8163799-ddb2-4876-830f-19da3abc4578-kube-api-access-4wd9h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.467089 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.467177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.473702 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.473824 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.487541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wd9h\" (UniqueName: \"kubernetes.io/projected/a8163799-ddb2-4876-830f-19da3abc4578-kube-api-access-4wd9h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-blr9d\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:56 crc kubenswrapper[4768]: I0217 14:02:56.622909 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:02:57 crc kubenswrapper[4768]: I0217 14:02:57.181224 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d"] Feb 17 14:02:57 crc kubenswrapper[4768]: I0217 14:02:57.214266 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" event={"ID":"a8163799-ddb2-4876-830f-19da3abc4578","Type":"ContainerStarted","Data":"f086ef29c593696b2a282ae30efda5817eecf69060b2606738e140366e129766"} Feb 17 14:02:58 crc kubenswrapper[4768]: I0217 14:02:58.230050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" event={"ID":"a8163799-ddb2-4876-830f-19da3abc4578","Type":"ContainerStarted","Data":"5f6e1ac172f1aae4ba674c7ac5f0bf78edc0302ddf5bb43fd0a5b111d9b240c1"} Feb 17 14:02:58 crc kubenswrapper[4768]: I0217 14:02:58.260251 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" podStartSLOduration=1.816090744 podStartE2EDuration="2.260234174s" podCreationTimestamp="2026-02-17 14:02:56 +0000 UTC" firstStartedPulling="2026-02-17 14:02:57.17793309 +0000 UTC m=+1596.457319532" lastFinishedPulling="2026-02-17 14:02:57.62207652 +0000 UTC m=+1596.901462962" observedRunningTime="2026-02-17 14:02:58.256554585 +0000 UTC m=+1597.535941027" watchObservedRunningTime="2026-02-17 14:02:58.260234174 +0000 UTC m=+1597.539620616" Feb 17 14:02:59 crc kubenswrapper[4768]: I0217 14:02:59.535337 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:02:59 crc kubenswrapper[4768]: E0217 14:02:59.535869 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:03:13 crc kubenswrapper[4768]: I0217 14:03:13.534803 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:03:13 crc kubenswrapper[4768]: E0217 14:03:13.535784 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:03:25 crc kubenswrapper[4768]: I0217 14:03:25.534947 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:03:25 crc kubenswrapper[4768]: E0217 14:03:25.535784 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:03:37 crc kubenswrapper[4768]: I0217 14:03:37.535136 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:03:37 crc kubenswrapper[4768]: E0217 14:03:37.535844 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:03:45 crc kubenswrapper[4768]: I0217 14:03:45.045202 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6qstf"] Feb 17 14:03:45 crc kubenswrapper[4768]: I0217 14:03:45.055026 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8557-account-create-update-zm7x5"] Feb 17 14:03:45 crc kubenswrapper[4768]: I0217 14:03:45.064093 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8557-account-create-update-zm7x5"] Feb 17 14:03:45 crc kubenswrapper[4768]: I0217 14:03:45.071776 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6qstf"] Feb 17 14:03:45 crc kubenswrapper[4768]: I0217 14:03:45.548599 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb47d4a7-16fc-402d-8943-40d7d22a00c4" path="/var/lib/kubelet/pods/fb47d4a7-16fc-402d-8943-40d7d22a00c4/volumes" Feb 17 14:03:45 crc kubenswrapper[4768]: I0217 14:03:45.550149 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff18ff3e-97c6-433b-8ad9-837a77fb0e88" path="/var/lib/kubelet/pods/ff18ff3e-97c6-433b-8ad9-837a77fb0e88/volumes" Feb 17 14:03:46 crc kubenswrapper[4768]: I0217 14:03:46.031913 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2wbdp"] Feb 17 14:03:46 crc kubenswrapper[4768]: I0217 14:03:46.044367 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3797-account-create-update-bftff"] Feb 17 14:03:46 crc kubenswrapper[4768]: I0217 14:03:46.051786 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-3797-account-create-update-bftff"] Feb 17 14:03:46 crc kubenswrapper[4768]: I0217 14:03:46.060156 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2wbdp"] Feb 17 14:03:47 crc kubenswrapper[4768]: I0217 14:03:47.548273 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f9209b-c554-49e3-886a-4e9ee73ebe3e" path="/var/lib/kubelet/pods/43f9209b-c554-49e3-886a-4e9ee73ebe3e/volumes" Feb 17 14:03:47 crc kubenswrapper[4768]: I0217 14:03:47.549337 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44008c3e-4ca6-4d59-8a7c-046a28c72b7d" path="/var/lib/kubelet/pods/44008c3e-4ca6-4d59-8a7c-046a28c72b7d/volumes" Feb 17 14:03:49 crc kubenswrapper[4768]: I0217 14:03:49.077525 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4h65c"] Feb 17 14:03:49 crc kubenswrapper[4768]: I0217 14:03:49.086379 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4h65c"] Feb 17 14:03:49 crc kubenswrapper[4768]: I0217 14:03:49.544904 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1fec574-62f3-4dfd-a087-8071bb46a099" path="/var/lib/kubelet/pods/f1fec574-62f3-4dfd-a087-8071bb46a099/volumes" Feb 17 14:03:50 crc kubenswrapper[4768]: I0217 14:03:50.032085 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5156-account-create-update-zbrm9"] Feb 17 14:03:50 crc kubenswrapper[4768]: I0217 14:03:50.040590 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5156-account-create-update-zbrm9"] Feb 17 14:03:51 crc kubenswrapper[4768]: I0217 14:03:51.558650 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:03:51 crc kubenswrapper[4768]: E0217 14:03:51.559039 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:03:51 crc kubenswrapper[4768]: I0217 14:03:51.562385 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1796830a-b57b-42b2-8b81-63fbdc349740" path="/var/lib/kubelet/pods/1796830a-b57b-42b2-8b81-63fbdc349740/volumes" Feb 17 14:04:02 crc kubenswrapper[4768]: I0217 14:04:02.534049 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:04:02 crc kubenswrapper[4768]: E0217 14:04:02.534949 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:04:13 crc kubenswrapper[4768]: I0217 14:04:13.037883 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fz9q7"] Feb 17 14:04:13 crc kubenswrapper[4768]: I0217 14:04:13.047682 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fz9q7"] Feb 17 14:04:13 crc kubenswrapper[4768]: I0217 14:04:13.551265 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabcbc7a-674e-499d-a30f-037a35c12ba7" path="/var/lib/kubelet/pods/cabcbc7a-674e-499d-a30f-037a35c12ba7/volumes" Feb 17 14:04:14 crc kubenswrapper[4768]: I0217 14:04:14.535611 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" 
Feb 17 14:04:14 crc kubenswrapper[4768]: E0217 14:04:14.536267 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:04:22 crc kubenswrapper[4768]: I0217 14:04:22.029974 4768 generic.go:334] "Generic (PLEG): container finished" podID="a8163799-ddb2-4876-830f-19da3abc4578" containerID="5f6e1ac172f1aae4ba674c7ac5f0bf78edc0302ddf5bb43fd0a5b111d9b240c1" exitCode=0 Feb 17 14:04:22 crc kubenswrapper[4768]: I0217 14:04:22.030088 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" event={"ID":"a8163799-ddb2-4876-830f-19da3abc4578","Type":"ContainerDied","Data":"5f6e1ac172f1aae4ba674c7ac5f0bf78edc0302ddf5bb43fd0a5b111d9b240c1"} Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.421699 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.591341 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wd9h\" (UniqueName: \"kubernetes.io/projected/a8163799-ddb2-4876-830f-19da3abc4578-kube-api-access-4wd9h\") pod \"a8163799-ddb2-4876-830f-19da3abc4578\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.591446 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-ssh-key-openstack-edpm-ipam\") pod \"a8163799-ddb2-4876-830f-19da3abc4578\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.591512 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-inventory\") pod \"a8163799-ddb2-4876-830f-19da3abc4578\" (UID: \"a8163799-ddb2-4876-830f-19da3abc4578\") " Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.599746 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8163799-ddb2-4876-830f-19da3abc4578-kube-api-access-4wd9h" (OuterVolumeSpecName: "kube-api-access-4wd9h") pod "a8163799-ddb2-4876-830f-19da3abc4578" (UID: "a8163799-ddb2-4876-830f-19da3abc4578"). InnerVolumeSpecName "kube-api-access-4wd9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.625129 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-inventory" (OuterVolumeSpecName: "inventory") pod "a8163799-ddb2-4876-830f-19da3abc4578" (UID: "a8163799-ddb2-4876-830f-19da3abc4578"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.628314 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8163799-ddb2-4876-830f-19da3abc4578" (UID: "a8163799-ddb2-4876-830f-19da3abc4578"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.694653 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wd9h\" (UniqueName: \"kubernetes.io/projected/a8163799-ddb2-4876-830f-19da3abc4578-kube-api-access-4wd9h\") on node \"crc\" DevicePath \"\"" Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.694693 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:04:23 crc kubenswrapper[4768]: I0217 14:04:23.694707 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8163799-ddb2-4876-830f-19da3abc4578-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.053535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" event={"ID":"a8163799-ddb2-4876-830f-19da3abc4578","Type":"ContainerDied","Data":"f086ef29c593696b2a282ae30efda5817eecf69060b2606738e140366e129766"} Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.053607 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f086ef29c593696b2a282ae30efda5817eecf69060b2606738e140366e129766" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 
14:04:24.053635 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-blr9d" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.144318 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc"] Feb 17 14:04:24 crc kubenswrapper[4768]: E0217 14:04:24.144763 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8163799-ddb2-4876-830f-19da3abc4578" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.144787 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8163799-ddb2-4876-830f-19da3abc4578" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.145023 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8163799-ddb2-4876-830f-19da3abc4578" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.145726 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.147690 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.147836 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.147883 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.148900 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.163662 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc"] Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.202899 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.203031 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf9pf\" (UniqueName: \"kubernetes.io/projected/9749c980-c481-4841-b24e-bd1dc6625b59-kube-api-access-sf9pf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 
14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.203117 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.304265 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.304421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.304487 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf9pf\" (UniqueName: \"kubernetes.io/projected/9749c980-c481-4841-b24e-bd1dc6625b59-kube-api-access-sf9pf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.311757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.325631 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.326796 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf9pf\" (UniqueName: \"kubernetes.io/projected/9749c980-c481-4841-b24e-bd1dc6625b59-kube-api-access-sf9pf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.463021 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.989672 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc"] Feb 17 14:04:24 crc kubenswrapper[4768]: I0217 14:04:24.994723 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:04:25 crc kubenswrapper[4768]: I0217 14:04:25.064404 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" event={"ID":"9749c980-c481-4841-b24e-bd1dc6625b59","Type":"ContainerStarted","Data":"d4b62d52e4861bb8b965c4468fa89616c00cc8781afe002bd237a2a790c77d9b"} Feb 17 14:04:25 crc kubenswrapper[4768]: I0217 14:04:25.552409 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:04:25 crc kubenswrapper[4768]: E0217 14:04:25.553531 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:04:26 crc kubenswrapper[4768]: I0217 14:04:26.077621 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" event={"ID":"9749c980-c481-4841-b24e-bd1dc6625b59","Type":"ContainerStarted","Data":"c61eebdfe527aadf44733767d4888c1dd26e21f502496c3007590a42085e7bdd"} Feb 17 14:04:26 crc kubenswrapper[4768]: I0217 14:04:26.096881 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" podStartSLOduration=1.5576763649999998 podStartE2EDuration="2.096853598s" podCreationTimestamp="2026-02-17 14:04:24 +0000 UTC" firstStartedPulling="2026-02-17 14:04:24.994549306 +0000 UTC m=+1684.273935748" lastFinishedPulling="2026-02-17 14:04:25.533726529 +0000 UTC m=+1684.813112981" observedRunningTime="2026-02-17 14:04:26.093087164 +0000 UTC m=+1685.372473646" watchObservedRunningTime="2026-02-17 14:04:26.096853598 +0000 UTC m=+1685.376240060" Feb 17 14:04:27 crc kubenswrapper[4768]: I0217 14:04:27.663636 4768 scope.go:117] "RemoveContainer" containerID="bf6acf4ae817d5dce7f5800bd78bd14b3933a032b692cb1b600c1ca83c8a4ab1" Feb 17 14:04:27 crc kubenswrapper[4768]: I0217 14:04:27.687332 4768 scope.go:117] "RemoveContainer" containerID="71e1adea8f990f4377121cf0b59e8d1ffb7b581c80312ebacf71e34764aceed5" Feb 17 14:04:27 crc kubenswrapper[4768]: I0217 14:04:27.732311 4768 scope.go:117] "RemoveContainer" containerID="ce4bad8f9dc20d5d0b127a1d1075495f86c7606d70dbca27d62c46cbae1bb061" Feb 17 14:04:27 crc kubenswrapper[4768]: I0217 14:04:27.776325 4768 scope.go:117] "RemoveContainer" containerID="f0a0952c2bc3c6fb4ccf78bb5d4b8ebe9205bb68ab4808bcc5ad6cbd12d56f76" Feb 17 14:04:27 crc kubenswrapper[4768]: I0217 14:04:27.818045 4768 scope.go:117] "RemoveContainer" containerID="d9406c637b16ae2ac9e13bad82de5b06d8284624cb2ce93679aebc846e4e102e" Feb 17 14:04:27 crc kubenswrapper[4768]: I0217 14:04:27.857947 4768 scope.go:117] "RemoveContainer" containerID="a88311b8751f759efa505a5321b475ddf302f354e83e59b4c701de0054d75e95" Feb 17 14:04:27 crc kubenswrapper[4768]: I0217 14:04:27.895706 4768 scope.go:117] "RemoveContainer" containerID="4d18fd16d9b7b5f25fe352c251ebd2670b27c45b1266d760555f5efff85d5253" Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.054910 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6bgrn"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 
14:04:38.063977 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vkdnz"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.072778 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1c1c-account-create-update-9knv8"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.082400 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d091-account-create-update-mmwqv"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.094317 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9cb6l"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.101854 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6bgrn"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.108681 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-eeff-account-create-update-d4h7z"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.150455 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vkdnz"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.152434 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9cb6l"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.161930 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4xwgl"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.170133 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d091-account-create-update-mmwqv"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.178828 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1c1c-account-create-update-9knv8"] Feb 17 14:04:38 crc kubenswrapper[4768]: I0217 14:04:38.187426 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-eeff-account-create-update-d4h7z"] Feb 17 14:04:38 crc 
kubenswrapper[4768]: I0217 14:04:38.195477 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4xwgl"] Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.534293 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:04:39 crc kubenswrapper[4768]: E0217 14:04:39.535634 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.549081 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1" path="/var/lib/kubelet/pods/02492e8d-aeb8-48ae-b6a3-05bcbcf88eb1/volumes" Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.550606 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a" path="/var/lib/kubelet/pods/2a6f0a6c-7ca0-4f3b-ab3b-d5e548d4874a/volumes" Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.551695 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31314241-7fb9-41ba-811f-64a9a907f49a" path="/var/lib/kubelet/pods/31314241-7fb9-41ba-811f-64a9a907f49a/volumes" Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.552761 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f492b6-e295-4f78-9d73-0643188ffe1c" path="/var/lib/kubelet/pods/63f492b6-e295-4f78-9d73-0643188ffe1c/volumes" Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.554163 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2dd794-fc73-4f82-a57d-e9d9314e8b7c" 
path="/var/lib/kubelet/pods/7f2dd794-fc73-4f82-a57d-e9d9314e8b7c/volumes" Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.556513 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde" path="/var/lib/kubelet/pods/9aa6c65b-8d9c-4e2d-a8c6-fa9845261fde/volumes" Feb 17 14:04:39 crc kubenswrapper[4768]: I0217 14:04:39.557588 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df03a5cd-6bf6-4275-bb4f-0310e49656fd" path="/var/lib/kubelet/pods/df03a5cd-6bf6-4275-bb4f-0310e49656fd/volumes" Feb 17 14:04:45 crc kubenswrapper[4768]: I0217 14:04:45.032254 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-76b7s"] Feb 17 14:04:45 crc kubenswrapper[4768]: I0217 14:04:45.041460 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-76b7s"] Feb 17 14:04:45 crc kubenswrapper[4768]: I0217 14:04:45.555026 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a664abb-3a1f-405e-830b-f3f2ad8c4d22" path="/var/lib/kubelet/pods/9a664abb-3a1f-405e-830b-f3f2ad8c4d22/volumes" Feb 17 14:04:52 crc kubenswrapper[4768]: I0217 14:04:52.534321 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:04:52 crc kubenswrapper[4768]: E0217 14:04:52.535081 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:05:03 crc kubenswrapper[4768]: I0217 14:05:03.535937 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 
14:05:03 crc kubenswrapper[4768]: E0217 14:05:03.537411 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:05:16 crc kubenswrapper[4768]: I0217 14:05:16.046172 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4vs6w"] Feb 17 14:05:16 crc kubenswrapper[4768]: I0217 14:05:16.054664 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4vs6w"] Feb 17 14:05:17 crc kubenswrapper[4768]: I0217 14:05:17.534738 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:05:17 crc kubenswrapper[4768]: E0217 14:05:17.535036 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:05:17 crc kubenswrapper[4768]: I0217 14:05:17.545852 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f4a984-4d42-469e-8eda-c49264f0e4d9" path="/var/lib/kubelet/pods/33f4a984-4d42-469e-8eda-c49264f0e4d9/volumes" Feb 17 14:05:23 crc kubenswrapper[4768]: I0217 14:05:23.877255 4768 generic.go:334] "Generic (PLEG): container finished" podID="9749c980-c481-4841-b24e-bd1dc6625b59" containerID="c61eebdfe527aadf44733767d4888c1dd26e21f502496c3007590a42085e7bdd" exitCode=0 Feb 17 14:05:23 crc 
kubenswrapper[4768]: I0217 14:05:23.877383 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" event={"ID":"9749c980-c481-4841-b24e-bd1dc6625b59","Type":"ContainerDied","Data":"c61eebdfe527aadf44733767d4888c1dd26e21f502496c3007590a42085e7bdd"} Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.318220 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.347619 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf9pf\" (UniqueName: \"kubernetes.io/projected/9749c980-c481-4841-b24e-bd1dc6625b59-kube-api-access-sf9pf\") pod \"9749c980-c481-4841-b24e-bd1dc6625b59\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.347738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-inventory\") pod \"9749c980-c481-4841-b24e-bd1dc6625b59\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.347889 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-ssh-key-openstack-edpm-ipam\") pod \"9749c980-c481-4841-b24e-bd1dc6625b59\" (UID: \"9749c980-c481-4841-b24e-bd1dc6625b59\") " Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.353633 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9749c980-c481-4841-b24e-bd1dc6625b59-kube-api-access-sf9pf" (OuterVolumeSpecName: "kube-api-access-sf9pf") pod "9749c980-c481-4841-b24e-bd1dc6625b59" (UID: "9749c980-c481-4841-b24e-bd1dc6625b59"). 
InnerVolumeSpecName "kube-api-access-sf9pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.373411 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-inventory" (OuterVolumeSpecName: "inventory") pod "9749c980-c481-4841-b24e-bd1dc6625b59" (UID: "9749c980-c481-4841-b24e-bd1dc6625b59"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.376409 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9749c980-c481-4841-b24e-bd1dc6625b59" (UID: "9749c980-c481-4841-b24e-bd1dc6625b59"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.450945 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.450983 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9749c980-c481-4841-b24e-bd1dc6625b59-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.450995 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf9pf\" (UniqueName: \"kubernetes.io/projected/9749c980-c481-4841-b24e-bd1dc6625b59-kube-api-access-sf9pf\") on node \"crc\" DevicePath \"\"" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.897770 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" event={"ID":"9749c980-c481-4841-b24e-bd1dc6625b59","Type":"ContainerDied","Data":"d4b62d52e4861bb8b965c4468fa89616c00cc8781afe002bd237a2a790c77d9b"} Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.897850 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b62d52e4861bb8b965c4468fa89616c00cc8781afe002bd237a2a790c77d9b" Feb 17 14:05:25 crc kubenswrapper[4768]: I0217 14:05:25.897883 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.005534 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn"] Feb 17 14:05:26 crc kubenswrapper[4768]: E0217 14:05:26.006123 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9749c980-c481-4841-b24e-bd1dc6625b59" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.006148 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9749c980-c481-4841-b24e-bd1dc6625b59" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.006742 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9749c980-c481-4841-b24e-bd1dc6625b59" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.007871 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.011985 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.012353 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.012526 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.015680 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.022904 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn"] Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.060379 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.060448 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6cb\" (UniqueName: \"kubernetes.io/projected/72dee802-02e1-4ce6-adf4-a32b56d357b4-kube-api-access-hl6cb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 
14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.060479 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.162048 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl6cb\" (UniqueName: \"kubernetes.io/projected/72dee802-02e1-4ce6-adf4-a32b56d357b4-kube-api-access-hl6cb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.162135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.162255 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.167059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.167335 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.189303 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl6cb\" (UniqueName: \"kubernetes.io/projected/72dee802-02e1-4ce6-adf4-a32b56d357b4-kube-api-access-hl6cb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.355611 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn"
Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.868430 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn"]
Feb 17 14:05:26 crc kubenswrapper[4768]: I0217 14:05:26.906673 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" event={"ID":"72dee802-02e1-4ce6-adf4-a32b56d357b4","Type":"ContainerStarted","Data":"86356278fd15c80abad039267b6eea77eb74c0f1158b4546405ba97aeb20c556"}
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.030434 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xfpzr"]
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.043868 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xfpzr"]
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.057964 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-78zqh"]
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.066444 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-78zqh"]
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.546295 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff099b6-c514-40c8-aa19-370d7f8dfbaf" path="/var/lib/kubelet/pods/7ff099b6-c514-40c8-aa19-370d7f8dfbaf/volumes"
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.547446 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1acea03-8a67-474d-a6b1-803ea949a747" path="/var/lib/kubelet/pods/e1acea03-8a67-474d-a6b1-803ea949a747/volumes"
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.916513 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" event={"ID":"72dee802-02e1-4ce6-adf4-a32b56d357b4","Type":"ContainerStarted","Data":"77f2921e290403ef93dab138307a13c1c82f526f706037fc5c0b7b7303a33a72"}
Feb 17 14:05:27 crc kubenswrapper[4768]: I0217 14:05:27.935978 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" podStartSLOduration=2.459308227 podStartE2EDuration="2.935956235s" podCreationTimestamp="2026-02-17 14:05:25 +0000 UTC" firstStartedPulling="2026-02-17 14:05:26.882281287 +0000 UTC m=+1746.161667729" lastFinishedPulling="2026-02-17 14:05:27.358929295 +0000 UTC m=+1746.638315737" observedRunningTime="2026-02-17 14:05:27.934065613 +0000 UTC m=+1747.213452055" watchObservedRunningTime="2026-02-17 14:05:27.935956235 +0000 UTC m=+1747.215342667"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.031305 4768 scope.go:117] "RemoveContainer" containerID="567bcfcd5b3a86cbacbf8fa49080f7399efeec0b42dd805f28787c6d2216a1a4"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.076145 4768 scope.go:117] "RemoveContainer" containerID="817654ff26d17a41b821e13d9494c615148a42d040ef50715af28a50f1a3360a"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.103069 4768 scope.go:117] "RemoveContainer" containerID="dd796d4b4a78e0e26f4248f26ee369b9a738e40d3896b5ef5141cc8645aafe76"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.150778 4768 scope.go:117] "RemoveContainer" containerID="ce75f57dc3df8a873079f9c8b07d66c9fdac75ed9895fb9cdad8d31fc27e241b"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.191266 4768 scope.go:117] "RemoveContainer" containerID="2b899bd14c79681239889a47f05b90923fe2933934a6fd482410670324cca7c8"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.212928 4768 scope.go:117] "RemoveContainer" containerID="d17a952c29f40f15fe6174f2dc06dfb5a24b20f0edbcd4e2c6e6fcce7c2ef88d"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.238910 4768 scope.go:117] "RemoveContainer" containerID="1025948318c7e5bdae6ba53f7f34d7fc4f909f69d14dc541130d510c4e0b05c6"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.281884 4768 scope.go:117] "RemoveContainer" containerID="6cad7da28298a9a03fbe52f9d8f1b2a16ea7ad53f48e2e65ed46870f19e25384"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.321841 4768 scope.go:117] "RemoveContainer" containerID="287a30ba8a24d538044a95f5da0b65dc6dadf2c4c58e322bcdf289f4acb987f2"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.343891 4768 scope.go:117] "RemoveContainer" containerID="68948f5019072470fe8cd0a2b36a2fd7dc1ce4a5f4323051921c897110b76a7e"
Feb 17 14:05:28 crc kubenswrapper[4768]: I0217 14:05:28.365988 4768 scope.go:117] "RemoveContainer" containerID="53973795e45a389ef48509cb52732f8a2459dfd39953db7f6644005b0b1daa69"
Feb 17 14:05:30 crc kubenswrapper[4768]: I0217 14:05:30.533849 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"
Feb 17 14:05:30 crc kubenswrapper[4768]: E0217 14:05:30.534324 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:05:31 crc kubenswrapper[4768]: I0217 14:05:31.957211 4768 generic.go:334] "Generic (PLEG): container finished" podID="72dee802-02e1-4ce6-adf4-a32b56d357b4" containerID="77f2921e290403ef93dab138307a13c1c82f526f706037fc5c0b7b7303a33a72" exitCode=0
Feb 17 14:05:31 crc kubenswrapper[4768]: I0217 14:05:31.957259 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" event={"ID":"72dee802-02e1-4ce6-adf4-a32b56d357b4","Type":"ContainerDied","Data":"77f2921e290403ef93dab138307a13c1c82f526f706037fc5c0b7b7303a33a72"}
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.398390 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn"
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.508930 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl6cb\" (UniqueName: \"kubernetes.io/projected/72dee802-02e1-4ce6-adf4-a32b56d357b4-kube-api-access-hl6cb\") pod \"72dee802-02e1-4ce6-adf4-a32b56d357b4\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") "
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.509806 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-inventory\") pod \"72dee802-02e1-4ce6-adf4-a32b56d357b4\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") "
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.510599 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-ssh-key-openstack-edpm-ipam\") pod \"72dee802-02e1-4ce6-adf4-a32b56d357b4\" (UID: \"72dee802-02e1-4ce6-adf4-a32b56d357b4\") "
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.517725 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72dee802-02e1-4ce6-adf4-a32b56d357b4-kube-api-access-hl6cb" (OuterVolumeSpecName: "kube-api-access-hl6cb") pod "72dee802-02e1-4ce6-adf4-a32b56d357b4" (UID: "72dee802-02e1-4ce6-adf4-a32b56d357b4"). InnerVolumeSpecName "kube-api-access-hl6cb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.549359 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-inventory" (OuterVolumeSpecName: "inventory") pod "72dee802-02e1-4ce6-adf4-a32b56d357b4" (UID: "72dee802-02e1-4ce6-adf4-a32b56d357b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.557059 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72dee802-02e1-4ce6-adf4-a32b56d357b4" (UID: "72dee802-02e1-4ce6-adf4-a32b56d357b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.612340 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.612515 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl6cb\" (UniqueName: \"kubernetes.io/projected/72dee802-02e1-4ce6-adf4-a32b56d357b4-kube-api-access-hl6cb\") on node \"crc\" DevicePath \"\""
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.612573 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dee802-02e1-4ce6-adf4-a32b56d357b4-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.978693 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn" event={"ID":"72dee802-02e1-4ce6-adf4-a32b56d357b4","Type":"ContainerDied","Data":"86356278fd15c80abad039267b6eea77eb74c0f1158b4546405ba97aeb20c556"}
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.978734 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86356278fd15c80abad039267b6eea77eb74c0f1158b4546405ba97aeb20c556"
Feb 17 14:05:33 crc kubenswrapper[4768]: I0217 14:05:33.978857 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.080430 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"]
Feb 17 14:05:34 crc kubenswrapper[4768]: E0217 14:05:34.081173 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dee802-02e1-4ce6-adf4-a32b56d357b4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.081276 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dee802-02e1-4ce6-adf4-a32b56d357b4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.081621 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dee802-02e1-4ce6-adf4-a32b56d357b4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.082542 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.085056 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.085089 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.085274 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.102240 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.105300 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"]
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.224874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.225272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.225357 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2245l\" (UniqueName: \"kubernetes.io/projected/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-kube-api-access-2245l\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.327775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.328642 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.328828 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2245l\" (UniqueName: \"kubernetes.io/projected/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-kube-api-access-2245l\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.333016 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.334590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.349803 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2245l\" (UniqueName: \"kubernetes.io/projected/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-kube-api-access-2245l\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rdgml\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.417225 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.967959 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"]
Feb 17 14:05:34 crc kubenswrapper[4768]: I0217 14:05:34.989464 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml" event={"ID":"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6","Type":"ContainerStarted","Data":"c7c788c87813a2611a12900567bb9e757045cec0dcf5413dca1a61103aba766c"}
Feb 17 14:05:36 crc kubenswrapper[4768]: I0217 14:05:36.000645 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml" event={"ID":"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6","Type":"ContainerStarted","Data":"a6d1cb492ce52605ca2541e82f7bc276dd0e8325ceecf301c6c649be8126c8fb"}
Feb 17 14:05:36 crc kubenswrapper[4768]: I0217 14:05:36.025279 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml" podStartSLOduration=1.601440254 podStartE2EDuration="2.025249785s" podCreationTimestamp="2026-02-17 14:05:34 +0000 UTC" firstStartedPulling="2026-02-17 14:05:34.973648893 +0000 UTC m=+1754.253035375" lastFinishedPulling="2026-02-17 14:05:35.397458464 +0000 UTC m=+1754.676844906" observedRunningTime="2026-02-17 14:05:36.016018991 +0000 UTC m=+1755.295405443" watchObservedRunningTime="2026-02-17 14:05:36.025249785 +0000 UTC m=+1755.304636267"
Feb 17 14:05:39 crc kubenswrapper[4768]: I0217 14:05:39.054611 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jmq2h"]
Feb 17 14:05:39 crc kubenswrapper[4768]: I0217 14:05:39.066403 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5c8md"]
Feb 17 14:05:39 crc kubenswrapper[4768]: I0217 14:05:39.076241 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jmq2h"]
Feb 17 14:05:39 crc kubenswrapper[4768]: I0217 14:05:39.083677 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5c8md"]
Feb 17 14:05:39 crc kubenswrapper[4768]: I0217 14:05:39.546711 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7e53d7-b63b-41b4-b909-c6effd0dab0c" path="/var/lib/kubelet/pods/df7e53d7-b63b-41b4-b909-c6effd0dab0c/volumes"
Feb 17 14:05:39 crc kubenswrapper[4768]: I0217 14:05:39.547509 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23e418f-2c16-4fa8-94fb-5e575affd61b" path="/var/lib/kubelet/pods/e23e418f-2c16-4fa8-94fb-5e575affd61b/volumes"
Feb 17 14:05:41 crc kubenswrapper[4768]: I0217 14:05:41.543428 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"
Feb 17 14:05:41 crc kubenswrapper[4768]: E0217 14:05:41.544204 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:05:53 crc kubenswrapper[4768]: I0217 14:05:53.536263 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"
Feb 17 14:05:53 crc kubenswrapper[4768]: E0217 14:05:53.537420 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:06:06 crc kubenswrapper[4768]: I0217 14:06:06.534197 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"
Feb 17 14:06:06 crc kubenswrapper[4768]: E0217 14:06:06.534878 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:06:09 crc kubenswrapper[4768]: I0217 14:06:09.330311 4768 generic.go:334] "Generic (PLEG): container finished" podID="c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6" containerID="a6d1cb492ce52605ca2541e82f7bc276dd0e8325ceecf301c6c649be8126c8fb" exitCode=0
Feb 17 14:06:09 crc kubenswrapper[4768]: I0217 14:06:09.330440 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml" event={"ID":"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6","Type":"ContainerDied","Data":"a6d1cb492ce52605ca2541e82f7bc276dd0e8325ceecf301c6c649be8126c8fb"}
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.737871 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.862954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-ssh-key-openstack-edpm-ipam\") pod \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") "
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.863025 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2245l\" (UniqueName: \"kubernetes.io/projected/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-kube-api-access-2245l\") pod \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") "
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.863060 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-inventory\") pod \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\" (UID: \"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6\") "
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.872571 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-kube-api-access-2245l" (OuterVolumeSpecName: "kube-api-access-2245l") pod "c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6" (UID: "c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6"). InnerVolumeSpecName "kube-api-access-2245l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.905061 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6" (UID: "c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.913702 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-inventory" (OuterVolumeSpecName: "inventory") pod "c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6" (UID: "c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.965763 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.965816 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2245l\" (UniqueName: \"kubernetes.io/projected/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-kube-api-access-2245l\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:10 crc kubenswrapper[4768]: I0217 14:06:10.965829 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.353302 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml" event={"ID":"c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6","Type":"ContainerDied","Data":"c7c788c87813a2611a12900567bb9e757045cec0dcf5413dca1a61103aba766c"}
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.353591 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c788c87813a2611a12900567bb9e757045cec0dcf5413dca1a61103aba766c"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.353429 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rdgml"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.472447 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"]
Feb 17 14:06:11 crc kubenswrapper[4768]: E0217 14:06:11.472895 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.472919 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.473168 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.473896 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.476403 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.476470 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.476738 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.478075 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.490602 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fgwk\" (UniqueName: \"kubernetes.io/projected/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-kube-api-access-4fgwk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.490737 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.490797 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.518201 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"]
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.592137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.592400 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.592692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fgwk\" (UniqueName: \"kubernetes.io/projected/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-kube-api-access-4fgwk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.595863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.598463 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.610684 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fgwk\" (UniqueName: \"kubernetes.io/projected/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-kube-api-access-4fgwk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:11 crc kubenswrapper[4768]: I0217 14:06:11.812083 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"
Feb 17 14:06:12 crc kubenswrapper[4768]: I0217 14:06:12.297247 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd"]
Feb 17 14:06:12 crc kubenswrapper[4768]: I0217 14:06:12.363384 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd" event={"ID":"e5fb7529-06bd-4dbe-aeb8-5753feec5be2","Type":"ContainerStarted","Data":"312363ee2e449e33e834925ecdd906296e40948f648c6ca64ae6b3c34c354392"}
Feb 17 14:06:14 crc kubenswrapper[4768]: I0217 14:06:14.387962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd" event={"ID":"e5fb7529-06bd-4dbe-aeb8-5753feec5be2","Type":"ContainerStarted","Data":"10b0f43530f424ba141f7b8935cbc89055e7ada474026f6240ddf5742ac43d2c"}
Feb 17 14:06:14 crc kubenswrapper[4768]: I0217 14:06:14.412417 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd" podStartSLOduration=2.400215014 podStartE2EDuration="3.412397924s" podCreationTimestamp="2026-02-17 14:06:11 +0000 UTC" firstStartedPulling="2026-02-17 14:06:12.306090746 +0000 UTC m=+1791.585477188" lastFinishedPulling="2026-02-17 14:06:13.318273646 +0000 UTC m=+1792.597660098" observedRunningTime="2026-02-17 14:06:14.406385609 +0000 UTC m=+1793.685772071" watchObservedRunningTime="2026-02-17 14:06:14.412397924 +0000 UTC m=+1793.691784376"
Feb 17 14:06:18 crc kubenswrapper[4768]: I0217 14:06:18.534469 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"
Feb 17 14:06:18 crc kubenswrapper[4768]: E0217 14:06:18.535316 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:06:28 crc kubenswrapper[4768]: I0217 14:06:28.626527 4768 scope.go:117] "RemoveContainer" containerID="e368e83be738ce819f9a99a41a85b5c0583f0baa2c6c1f5bd60a123d3eb716a7"
Feb 17 14:06:28 crc kubenswrapper[4768]: I0217 14:06:28.661096 4768 scope.go:117] "RemoveContainer" containerID="66f32dc57d820647119bc07c2c3ffc4dae0c504a4d8c1f693646f527e404d135"
Feb 17 14:06:29 crc kubenswrapper[4768]: I0217 14:06:29.534674 4768 scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7"
Feb 17 14:06:30 crc kubenswrapper[4768]: I0217 14:06:30.535023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"f4b1ed5ad4696f245b46f42d7dd4597fdcb14a363987811db0ee8a9896aa7bd9"}
Feb 17 14:06:36 crc kubenswrapper[4768]: I0217 14:06:36.044316 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-930a-account-create-update-vzjbq"]
Feb 17 14:06:36 crc kubenswrapper[4768]: I0217 14:06:36.067739 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wbqmg"]
Feb 17 14:06:36 crc kubenswrapper[4768]: I0217 14:06:36.067810 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-930a-account-create-update-vzjbq"]
Feb 17 14:06:36 crc kubenswrapper[4768]: I0217 14:06:36.067827 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9c5h6"]
Feb 17 14:06:36 crc kubenswrapper[4768]: I0217 14:06:36.078877 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9c5h6"]
Feb 17 14:06:36 crc kubenswrapper[4768]: I0217 14:06:36.089016 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wbqmg"]
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.033426 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-ggdjk"]
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.045374 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ef60-account-create-update-262kh"]
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.055578 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f310-account-create-update-pssrg"]
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.063173 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ggdjk"]
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.070351 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f310-account-create-update-pssrg"]
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.076982 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ef60-account-create-update-262kh"]
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.561623 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11dd3938-0363-4020-b8c3-4a1510d0d400" path="/var/lib/kubelet/pods/11dd3938-0363-4020-b8c3-4a1510d0d400/volumes"
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.568364 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="127b440a-bcde-4b51-ae43-221b093dcdb7" path="/var/lib/kubelet/pods/127b440a-bcde-4b51-ae43-221b093dcdb7/volumes"
Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.570870 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc679a7-9d70-46d9-a89b-69e761fcf366" path="/var/lib/kubelet/pods/8dc679a7-9d70-46d9-a89b-69e761fcf366/volumes"
Feb
17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.572464 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9421cc5-76da-4822-984c-7ac27c814dfe" path="/var/lib/kubelet/pods/b9421cc5-76da-4822-984c-7ac27c814dfe/volumes" Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.575077 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fed022-7a29-4dd3-8660-be750880438c" path="/var/lib/kubelet/pods/d4fed022-7a29-4dd3-8660-be750880438c/volumes" Feb 17 14:06:37 crc kubenswrapper[4768]: I0217 14:06:37.576384 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f159e76f-1606-4a1d-8ce3-647851c11669" path="/var/lib/kubelet/pods/f159e76f-1606-4a1d-8ce3-647851c11669/volumes" Feb 17 14:06:54 crc kubenswrapper[4768]: I0217 14:06:54.757272 4768 generic.go:334] "Generic (PLEG): container finished" podID="e5fb7529-06bd-4dbe-aeb8-5753feec5be2" containerID="10b0f43530f424ba141f7b8935cbc89055e7ada474026f6240ddf5742ac43d2c" exitCode=0 Feb 17 14:06:54 crc kubenswrapper[4768]: I0217 14:06:54.757505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd" event={"ID":"e5fb7529-06bd-4dbe-aeb8-5753feec5be2","Type":"ContainerDied","Data":"10b0f43530f424ba141f7b8935cbc89055e7ada474026f6240ddf5742ac43d2c"} Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.145519 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.307123 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-ssh-key-openstack-edpm-ipam\") pod \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.307375 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-inventory\") pod \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.307585 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fgwk\" (UniqueName: \"kubernetes.io/projected/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-kube-api-access-4fgwk\") pod \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\" (UID: \"e5fb7529-06bd-4dbe-aeb8-5753feec5be2\") " Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.315942 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-kube-api-access-4fgwk" (OuterVolumeSpecName: "kube-api-access-4fgwk") pod "e5fb7529-06bd-4dbe-aeb8-5753feec5be2" (UID: "e5fb7529-06bd-4dbe-aeb8-5753feec5be2"). InnerVolumeSpecName "kube-api-access-4fgwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.338765 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-inventory" (OuterVolumeSpecName: "inventory") pod "e5fb7529-06bd-4dbe-aeb8-5753feec5be2" (UID: "e5fb7529-06bd-4dbe-aeb8-5753feec5be2"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.343452 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e5fb7529-06bd-4dbe-aeb8-5753feec5be2" (UID: "e5fb7529-06bd-4dbe-aeb8-5753feec5be2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.412603 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.412639 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.412651 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fgwk\" (UniqueName: \"kubernetes.io/projected/e5fb7529-06bd-4dbe-aeb8-5753feec5be2-kube-api-access-4fgwk\") on node \"crc\" DevicePath \"\"" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.774076 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd" event={"ID":"e5fb7529-06bd-4dbe-aeb8-5753feec5be2","Type":"ContainerDied","Data":"312363ee2e449e33e834925ecdd906296e40948f648c6ca64ae6b3c34c354392"} Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.774604 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312363ee2e449e33e834925ecdd906296e40948f648c6ca64ae6b3c34c354392" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 
14:06:56.774562 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.870752 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4nwsr"] Feb 17 14:06:56 crc kubenswrapper[4768]: E0217 14:06:56.871322 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fb7529-06bd-4dbe-aeb8-5753feec5be2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.871550 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fb7529-06bd-4dbe-aeb8-5753feec5be2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.871981 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fb7529-06bd-4dbe-aeb8-5753feec5be2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.873801 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.876693 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.878935 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.879080 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.879361 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.884350 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4nwsr"] Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.926901 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78k6q\" (UniqueName: \"kubernetes.io/projected/62a034b9-286c-4b4b-aea8-8ca20fe7610f-kube-api-access-78k6q\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.927259 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:56 crc kubenswrapper[4768]: I0217 14:06:56.927451 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.029902 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.030185 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78k6q\" (UniqueName: \"kubernetes.io/projected/62a034b9-286c-4b4b-aea8-8ca20fe7610f-kube-api-access-78k6q\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.030279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.036825 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.036857 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.053224 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78k6q\" (UniqueName: \"kubernetes.io/projected/62a034b9-286c-4b4b-aea8-8ca20fe7610f-kube-api-access-78k6q\") pod \"ssh-known-hosts-edpm-deployment-4nwsr\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.202283 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.755441 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4nwsr"] Feb 17 14:06:57 crc kubenswrapper[4768]: I0217 14:06:57.787181 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" event={"ID":"62a034b9-286c-4b4b-aea8-8ca20fe7610f","Type":"ContainerStarted","Data":"45bd33d8953b4ce16de3242161217d0768ee2d355895e167a4adc4aa0520c4d7"} Feb 17 14:06:58 crc kubenswrapper[4768]: I0217 14:06:58.799608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" event={"ID":"62a034b9-286c-4b4b-aea8-8ca20fe7610f","Type":"ContainerStarted","Data":"a42997966b9d017794cbb77398089aa8d82ae6d0b0a396bd2e66e6bcaf2ebed2"} Feb 17 14:06:58 crc kubenswrapper[4768]: I0217 14:06:58.830674 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" 
podStartSLOduration=2.236489583 podStartE2EDuration="2.830631532s" podCreationTimestamp="2026-02-17 14:06:56 +0000 UTC" firstStartedPulling="2026-02-17 14:06:57.771857446 +0000 UTC m=+1837.051243888" lastFinishedPulling="2026-02-17 14:06:58.365999385 +0000 UTC m=+1837.645385837" observedRunningTime="2026-02-17 14:06:58.820728511 +0000 UTC m=+1838.100114973" watchObservedRunningTime="2026-02-17 14:06:58.830631532 +0000 UTC m=+1838.110017974" Feb 17 14:07:02 crc kubenswrapper[4768]: I0217 14:07:02.042507 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lqvwx"] Feb 17 14:07:02 crc kubenswrapper[4768]: I0217 14:07:02.057908 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lqvwx"] Feb 17 14:07:03 crc kubenswrapper[4768]: I0217 14:07:03.547042 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37a5596-9bc4-4df1-af63-e4475450a07f" path="/var/lib/kubelet/pods/e37a5596-9bc4-4df1-af63-e4475450a07f/volumes" Feb 17 14:07:06 crc kubenswrapper[4768]: I0217 14:07:06.883973 4768 generic.go:334] "Generic (PLEG): container finished" podID="62a034b9-286c-4b4b-aea8-8ca20fe7610f" containerID="a42997966b9d017794cbb77398089aa8d82ae6d0b0a396bd2e66e6bcaf2ebed2" exitCode=0 Feb 17 14:07:06 crc kubenswrapper[4768]: I0217 14:07:06.884062 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" event={"ID":"62a034b9-286c-4b4b-aea8-8ca20fe7610f","Type":"ContainerDied","Data":"a42997966b9d017794cbb77398089aa8d82ae6d0b0a396bd2e66e6bcaf2ebed2"} Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.317462 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.488323 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-ssh-key-openstack-edpm-ipam\") pod \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.488481 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-inventory-0\") pod \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.488554 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78k6q\" (UniqueName: \"kubernetes.io/projected/62a034b9-286c-4b4b-aea8-8ca20fe7610f-kube-api-access-78k6q\") pod \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\" (UID: \"62a034b9-286c-4b4b-aea8-8ca20fe7610f\") " Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.494297 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a034b9-286c-4b4b-aea8-8ca20fe7610f-kube-api-access-78k6q" (OuterVolumeSpecName: "kube-api-access-78k6q") pod "62a034b9-286c-4b4b-aea8-8ca20fe7610f" (UID: "62a034b9-286c-4b4b-aea8-8ca20fe7610f"). InnerVolumeSpecName "kube-api-access-78k6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.517847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "62a034b9-286c-4b4b-aea8-8ca20fe7610f" (UID: "62a034b9-286c-4b4b-aea8-8ca20fe7610f"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.524142 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62a034b9-286c-4b4b-aea8-8ca20fe7610f" (UID: "62a034b9-286c-4b4b-aea8-8ca20fe7610f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.592965 4768 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.593020 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78k6q\" (UniqueName: \"kubernetes.io/projected/62a034b9-286c-4b4b-aea8-8ca20fe7610f-kube-api-access-78k6q\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.593039 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a034b9-286c-4b4b-aea8-8ca20fe7610f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.949991 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" event={"ID":"62a034b9-286c-4b4b-aea8-8ca20fe7610f","Type":"ContainerDied","Data":"45bd33d8953b4ce16de3242161217d0768ee2d355895e167a4adc4aa0520c4d7"} Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.950021 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4nwsr" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.950042 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45bd33d8953b4ce16de3242161217d0768ee2d355895e167a4adc4aa0520c4d7" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.992172 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9"] Feb 17 14:07:08 crc kubenswrapper[4768]: E0217 14:07:08.992621 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a034b9-286c-4b4b-aea8-8ca20fe7610f" containerName="ssh-known-hosts-edpm-deployment" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.992647 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a034b9-286c-4b4b-aea8-8ca20fe7610f" containerName="ssh-known-hosts-edpm-deployment" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.993057 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a034b9-286c-4b4b-aea8-8ca20fe7610f" containerName="ssh-known-hosts-edpm-deployment" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.995413 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.998552 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.998675 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.998819 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:07:08 crc kubenswrapper[4768]: I0217 14:07:08.998941 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.001712 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.001855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dl9r\" (UniqueName: \"kubernetes.io/projected/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-kube-api-access-5dl9r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.002069 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.005943 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9"] Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.103474 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.103581 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.103627 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dl9r\" (UniqueName: \"kubernetes.io/projected/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-kube-api-access-5dl9r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.108807 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.116364 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.124581 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dl9r\" (UniqueName: \"kubernetes.io/projected/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-kube-api-access-5dl9r\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-td9b9\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.322312 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.858586 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9"] Feb 17 14:07:09 crc kubenswrapper[4768]: I0217 14:07:09.961995 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" event={"ID":"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff","Type":"ContainerStarted","Data":"6a6a8d9d28a889b929a0bd478bf96b28ee2e75c70778fa81a9f3ea6c3b11460b"} Feb 17 14:07:10 crc kubenswrapper[4768]: I0217 14:07:10.982358 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" event={"ID":"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff","Type":"ContainerStarted","Data":"7219794be635a4fc7d0c9a23f38bad94a96507dac84decf5b004a96514ff7e93"} Feb 17 14:07:11 crc kubenswrapper[4768]: I0217 14:07:11.003848 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" podStartSLOduration=2.295850838 podStartE2EDuration="3.003817578s" podCreationTimestamp="2026-02-17 14:07:08 +0000 UTC" firstStartedPulling="2026-02-17 14:07:09.866649981 +0000 UTC m=+1849.146036433" lastFinishedPulling="2026-02-17 14:07:10.574616721 +0000 UTC m=+1849.854003173" observedRunningTime="2026-02-17 14:07:11.001259328 +0000 UTC m=+1850.280645780" watchObservedRunningTime="2026-02-17 14:07:11.003817578 +0000 UTC m=+1850.283204030" Feb 17 14:07:18 crc kubenswrapper[4768]: I0217 14:07:18.044076 4768 generic.go:334] "Generic (PLEG): container finished" podID="c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff" containerID="7219794be635a4fc7d0c9a23f38bad94a96507dac84decf5b004a96514ff7e93" exitCode=0 Feb 17 14:07:18 crc kubenswrapper[4768]: I0217 14:07:18.044186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" event={"ID":"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff","Type":"ContainerDied","Data":"7219794be635a4fc7d0c9a23f38bad94a96507dac84decf5b004a96514ff7e93"} Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.440206 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.524823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dl9r\" (UniqueName: \"kubernetes.io/projected/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-kube-api-access-5dl9r\") pod \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.524908 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-inventory\") pod \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.525135 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-ssh-key-openstack-edpm-ipam\") pod \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\" (UID: \"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff\") " Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.530426 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-kube-api-access-5dl9r" (OuterVolumeSpecName: "kube-api-access-5dl9r") pod "c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff" (UID: "c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff"). InnerVolumeSpecName "kube-api-access-5dl9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.556385 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff" (UID: "c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.566121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-inventory" (OuterVolumeSpecName: "inventory") pod "c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff" (UID: "c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.628138 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dl9r\" (UniqueName: \"kubernetes.io/projected/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-kube-api-access-5dl9r\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.628176 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:19 crc kubenswrapper[4768]: I0217 14:07:19.628219 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.069454 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" 
event={"ID":"c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff","Type":"ContainerDied","Data":"6a6a8d9d28a889b929a0bd478bf96b28ee2e75c70778fa81a9f3ea6c3b11460b"} Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.069872 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6a8d9d28a889b929a0bd478bf96b28ee2e75c70778fa81a9f3ea6c3b11460b" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.069522 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-td9b9" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.162541 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk"] Feb 17 14:07:20 crc kubenswrapper[4768]: E0217 14:07:20.163192 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.163223 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.163438 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.164375 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.167635 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.168016 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.168254 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.168444 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.181192 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk"] Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.239040 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.239096 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.239172 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmqd\" (UniqueName: \"kubernetes.io/projected/42b3a8d2-3952-474e-9821-8472466012cb-kube-api-access-fbmqd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.341311 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.341360 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.341399 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmqd\" (UniqueName: \"kubernetes.io/projected/42b3a8d2-3952-474e-9821-8472466012cb-kube-api-access-fbmqd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.346808 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.347351 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.361668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmqd\" (UniqueName: \"kubernetes.io/projected/42b3a8d2-3952-474e-9821-8472466012cb-kube-api-access-fbmqd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:20 crc kubenswrapper[4768]: I0217 14:07:20.497975 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:21 crc kubenswrapper[4768]: I0217 14:07:21.056260 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk"] Feb 17 14:07:21 crc kubenswrapper[4768]: I0217 14:07:21.081348 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" event={"ID":"42b3a8d2-3952-474e-9821-8472466012cb","Type":"ContainerStarted","Data":"0890c4d630cb8d48015b8aa3804338e365cc06750a5db4d6f2708cdc9adbfbe9"} Feb 17 14:07:21 crc kubenswrapper[4768]: I0217 14:07:21.568669 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:07:22 crc kubenswrapper[4768]: I0217 14:07:22.090643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" event={"ID":"42b3a8d2-3952-474e-9821-8472466012cb","Type":"ContainerStarted","Data":"0570034ad802f32763bca326030cff383e25430261cca085490d2ecfca7674b1"} Feb 17 14:07:22 crc kubenswrapper[4768]: I0217 14:07:22.113232 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" podStartSLOduration=1.617049367 podStartE2EDuration="2.113214179s" podCreationTimestamp="2026-02-17 14:07:20 +0000 UTC" firstStartedPulling="2026-02-17 14:07:21.0664426 +0000 UTC m=+1860.345829042" lastFinishedPulling="2026-02-17 14:07:21.562607412 +0000 UTC m=+1860.841993854" observedRunningTime="2026-02-17 14:07:22.10888848 +0000 UTC m=+1861.388274922" watchObservedRunningTime="2026-02-17 14:07:22.113214179 +0000 UTC m=+1861.392600621" Feb 17 14:07:24 crc kubenswrapper[4768]: I0217 14:07:24.041928 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6688"] Feb 17 14:07:24 crc kubenswrapper[4768]: I0217 
14:07:24.048895 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6688"] Feb 17 14:07:25 crc kubenswrapper[4768]: I0217 14:07:25.545965 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5558d8-d8ab-4e56-a053-bd878be0dfb7" path="/var/lib/kubelet/pods/ab5558d8-d8ab-4e56-a053-bd878be0dfb7/volumes" Feb 17 14:07:26 crc kubenswrapper[4768]: I0217 14:07:26.043983 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wdm"] Feb 17 14:07:26 crc kubenswrapper[4768]: I0217 14:07:26.057541 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z5wdm"] Feb 17 14:07:27 crc kubenswrapper[4768]: I0217 14:07:27.546752 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7636550e-37fd-4031-a05d-603fef57553a" path="/var/lib/kubelet/pods/7636550e-37fd-4031-a05d-603fef57553a/volumes" Feb 17 14:07:28 crc kubenswrapper[4768]: I0217 14:07:28.777012 4768 scope.go:117] "RemoveContainer" containerID="7f6431dc27c7a6bdb46e8e5182986983ab2f9b0e28a950cee33fb278b3006033" Feb 17 14:07:28 crc kubenswrapper[4768]: I0217 14:07:28.801089 4768 scope.go:117] "RemoveContainer" containerID="0d2430c2ff9b73a4bba8bdc12e39f50afd5d02c15a1929ec3be4ef7304e75580" Feb 17 14:07:28 crc kubenswrapper[4768]: I0217 14:07:28.854352 4768 scope.go:117] "RemoveContainer" containerID="ed2461ef02a61bee063e89b94b574e2a569467f23a6acfaec8cb90a7beed37ed" Feb 17 14:07:28 crc kubenswrapper[4768]: I0217 14:07:28.900565 4768 scope.go:117] "RemoveContainer" containerID="bbf8f18df6c232b107faf4ca4b5b269de1cf55797370bd038d67d754d01b5dc3" Feb 17 14:07:28 crc kubenswrapper[4768]: I0217 14:07:28.935011 4768 scope.go:117] "RemoveContainer" containerID="851b6d25464a9f9ecf281705fffd30f7e68254b23646d78f4935dc0709f2790d" Feb 17 14:07:28 crc kubenswrapper[4768]: I0217 14:07:28.975127 4768 scope.go:117] "RemoveContainer" 
containerID="2386d79fdae6ad85a03996c9a38134a990c02eb25b593969a3ee16241de00a38" Feb 17 14:07:29 crc kubenswrapper[4768]: I0217 14:07:29.038757 4768 scope.go:117] "RemoveContainer" containerID="6f1ffb80a0ea190bf7de58c9964e9e6d33de99fa982223dce6cb8f70bf07c3a0" Feb 17 14:07:29 crc kubenswrapper[4768]: I0217 14:07:29.060183 4768 scope.go:117] "RemoveContainer" containerID="b2a1f2873dd7eae98bad852dc5d0a50cfdc71bae83dc7248c89fa165e7458f33" Feb 17 14:07:29 crc kubenswrapper[4768]: I0217 14:07:29.107797 4768 scope.go:117] "RemoveContainer" containerID="2ecb5297fc86f40a5e569044850dc88193c82cec590e64e12d688999dccf833d" Feb 17 14:07:31 crc kubenswrapper[4768]: I0217 14:07:31.198155 4768 generic.go:334] "Generic (PLEG): container finished" podID="42b3a8d2-3952-474e-9821-8472466012cb" containerID="0570034ad802f32763bca326030cff383e25430261cca085490d2ecfca7674b1" exitCode=0 Feb 17 14:07:31 crc kubenswrapper[4768]: I0217 14:07:31.198343 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" event={"ID":"42b3a8d2-3952-474e-9821-8472466012cb","Type":"ContainerDied","Data":"0570034ad802f32763bca326030cff383e25430261cca085490d2ecfca7674b1"} Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.624669 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.776543 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-ssh-key-openstack-edpm-ipam\") pod \"42b3a8d2-3952-474e-9821-8472466012cb\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.776615 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-inventory\") pod \"42b3a8d2-3952-474e-9821-8472466012cb\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.776646 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbmqd\" (UniqueName: \"kubernetes.io/projected/42b3a8d2-3952-474e-9821-8472466012cb-kube-api-access-fbmqd\") pod \"42b3a8d2-3952-474e-9821-8472466012cb\" (UID: \"42b3a8d2-3952-474e-9821-8472466012cb\") " Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.782487 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b3a8d2-3952-474e-9821-8472466012cb-kube-api-access-fbmqd" (OuterVolumeSpecName: "kube-api-access-fbmqd") pod "42b3a8d2-3952-474e-9821-8472466012cb" (UID: "42b3a8d2-3952-474e-9821-8472466012cb"). InnerVolumeSpecName "kube-api-access-fbmqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.805067 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "42b3a8d2-3952-474e-9821-8472466012cb" (UID: "42b3a8d2-3952-474e-9821-8472466012cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.805533 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-inventory" (OuterVolumeSpecName: "inventory") pod "42b3a8d2-3952-474e-9821-8472466012cb" (UID: "42b3a8d2-3952-474e-9821-8472466012cb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.878852 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.878889 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42b3a8d2-3952-474e-9821-8472466012cb-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:32 crc kubenswrapper[4768]: I0217 14:07:32.878902 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbmqd\" (UniqueName: \"kubernetes.io/projected/42b3a8d2-3952-474e-9821-8472466012cb-kube-api-access-fbmqd\") on node \"crc\" DevicePath \"\"" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.228467 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" 
event={"ID":"42b3a8d2-3952-474e-9821-8472466012cb","Type":"ContainerDied","Data":"0890c4d630cb8d48015b8aa3804338e365cc06750a5db4d6f2708cdc9adbfbe9"} Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.228521 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0890c4d630cb8d48015b8aa3804338e365cc06750a5db4d6f2708cdc9adbfbe9" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.228600 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.296027 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5"] Feb 17 14:07:33 crc kubenswrapper[4768]: E0217 14:07:33.296584 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b3a8d2-3952-474e-9821-8472466012cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.296615 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b3a8d2-3952-474e-9821-8472466012cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.296852 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b3a8d2-3952-474e-9821-8472466012cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.297622 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.299977 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.300318 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.300493 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.301029 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.301287 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.303008 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.303040 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.303281 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.313006 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5"] Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489796 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489873 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wnmb\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-kube-api-access-7wnmb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489923 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489949 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489966 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.489990 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.490016 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.490054 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.490089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.490124 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592148 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wnmb\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-kube-api-access-7wnmb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592237 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592284 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-libvirt-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592309 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592345 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592382 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592501 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592541 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592633 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592666 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592693 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592720 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.592752 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.599034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.600293 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.600564 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.600902 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.601579 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.601924 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.604185 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.604458 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.604874 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.607837 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.612743 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.614548 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.617715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wnmb\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-kube-api-access-7wnmb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.623460 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5\" 
(UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:33 crc kubenswrapper[4768]: I0217 14:07:33.918146 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:07:34 crc kubenswrapper[4768]: I0217 14:07:34.437036 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5"] Feb 17 14:07:35 crc kubenswrapper[4768]: I0217 14:07:35.242859 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" event={"ID":"f84244fa-e156-4bf4-bc42-22336b96a556","Type":"ContainerStarted","Data":"8f6f94fe19b2187285f2908d71b870e1c0367a344ae18dea803e215b7d518f8a"} Feb 17 14:07:35 crc kubenswrapper[4768]: I0217 14:07:35.243196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" event={"ID":"f84244fa-e156-4bf4-bc42-22336b96a556","Type":"ContainerStarted","Data":"72ddf44be2a5366c6f7051b9acd1c010e75589490d5552545e49cb3fbeb3e5f5"} Feb 17 14:07:35 crc kubenswrapper[4768]: I0217 14:07:35.277527 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" podStartSLOduration=1.8741484370000001 podStartE2EDuration="2.277509016s" podCreationTimestamp="2026-02-17 14:07:33 +0000 UTC" firstStartedPulling="2026-02-17 14:07:34.447189451 +0000 UTC m=+1873.726575893" lastFinishedPulling="2026-02-17 14:07:34.85055003 +0000 UTC m=+1874.129936472" observedRunningTime="2026-02-17 14:07:35.274442572 +0000 UTC m=+1874.553829014" watchObservedRunningTime="2026-02-17 14:07:35.277509016 +0000 UTC m=+1874.556895458" Feb 17 14:08:07 crc kubenswrapper[4768]: I0217 14:08:07.523944 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="f84244fa-e156-4bf4-bc42-22336b96a556" containerID="8f6f94fe19b2187285f2908d71b870e1c0367a344ae18dea803e215b7d518f8a" exitCode=0 Feb 17 14:08:07 crc kubenswrapper[4768]: I0217 14:08:07.524067 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" event={"ID":"f84244fa-e156-4bf4-bc42-22336b96a556","Type":"ContainerDied","Data":"8f6f94fe19b2187285f2908d71b870e1c0367a344ae18dea803e215b7d518f8a"} Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.006785 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.078528 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.078596 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-telemetry-combined-ca-bundle\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.078856 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wnmb\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-kube-api-access-7wnmb\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.078931 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-neutron-metadata-combined-ca-bundle\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079026 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-bootstrap-combined-ca-bundle\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079080 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-nova-combined-ca-bundle\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079186 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-inventory\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079225 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079766 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ovn-combined-ca-bundle\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079848 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079918 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.079968 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-repo-setup-combined-ca-bundle\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.080054 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ssh-key-openstack-edpm-ipam\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.080128 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-libvirt-combined-ca-bundle\") pod \"f84244fa-e156-4bf4-bc42-22336b96a556\" (UID: \"f84244fa-e156-4bf4-bc42-22336b96a556\") " Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.085197 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.085238 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.085329 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.086012 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.086049 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.087393 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-kube-api-access-7wnmb" (OuterVolumeSpecName: "kube-api-access-7wnmb") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "kube-api-access-7wnmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.088252 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.089304 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.090098 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.090641 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.090904 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.101713 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.113940 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.125686 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-inventory" (OuterVolumeSpecName: "inventory") pod "f84244fa-e156-4bf4-bc42-22336b96a556" (UID: "f84244fa-e156-4bf4-bc42-22336b96a556"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182841 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182888 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182903 4768 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182916 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182927 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182941 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182953 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182965 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182976 4768 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182987 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.182999 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.183013 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.183026 4768 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84244fa-e156-4bf4-bc42-22336b96a556-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 
crc kubenswrapper[4768]: I0217 14:08:09.183038 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wnmb\" (UniqueName: \"kubernetes.io/projected/f84244fa-e156-4bf4-bc42-22336b96a556-kube-api-access-7wnmb\") on node \"crc\" DevicePath \"\"" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.550715 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" event={"ID":"f84244fa-e156-4bf4-bc42-22336b96a556","Type":"ContainerDied","Data":"72ddf44be2a5366c6f7051b9acd1c010e75589490d5552545e49cb3fbeb3e5f5"} Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.550771 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72ddf44be2a5366c6f7051b9acd1c010e75589490d5552545e49cb3fbeb3e5f5" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.550855 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.669637 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc"] Feb 17 14:08:09 crc kubenswrapper[4768]: E0217 14:08:09.669988 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84244fa-e156-4bf4-bc42-22336b96a556" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.670003 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84244fa-e156-4bf4-bc42-22336b96a556" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.670205 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84244fa-e156-4bf4-bc42-22336b96a556" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.671144 4768 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.673714 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.674635 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.675489 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.693025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.693087 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.693148 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.693173 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.693281 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srl6\" (UniqueName: \"kubernetes.io/projected/20f7a484-7e3c-4df5-84b0-98bd83632fb1-kube-api-access-2srl6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.701906 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc"] Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.702044 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.702307 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.794934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.795011 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.795044 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.795220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2srl6\" (UniqueName: \"kubernetes.io/projected/20f7a484-7e3c-4df5-84b0-98bd83632fb1-kube-api-access-2srl6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.795308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.796093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: 
\"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.798706 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.799066 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.799962 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.815610 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srl6\" (UniqueName: \"kubernetes.io/projected/20f7a484-7e3c-4df5-84b0-98bd83632fb1-kube-api-access-2srl6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9nqqc\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:09 crc kubenswrapper[4768]: I0217 14:08:09.993769 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:08:10 crc kubenswrapper[4768]: I0217 14:08:10.046530 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k959c"] Feb 17 14:08:10 crc kubenswrapper[4768]: I0217 14:08:10.059370 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k959c"] Feb 17 14:08:10 crc kubenswrapper[4768]: I0217 14:08:10.490296 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc"] Feb 17 14:08:10 crc kubenswrapper[4768]: W0217 14:08:10.495218 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20f7a484_7e3c_4df5_84b0_98bd83632fb1.slice/crio-b2d022bf406eb248c7cb0b8c5259e3e60619f83b2598a6b6a850eeca2c364113 WatchSource:0}: Error finding container b2d022bf406eb248c7cb0b8c5259e3e60619f83b2598a6b6a850eeca2c364113: Status 404 returned error can't find the container with id b2d022bf406eb248c7cb0b8c5259e3e60619f83b2598a6b6a850eeca2c364113 Feb 17 14:08:10 crc kubenswrapper[4768]: I0217 14:08:10.559539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" event={"ID":"20f7a484-7e3c-4df5-84b0-98bd83632fb1","Type":"ContainerStarted","Data":"b2d022bf406eb248c7cb0b8c5259e3e60619f83b2598a6b6a850eeca2c364113"} Feb 17 14:08:11 crc kubenswrapper[4768]: I0217 14:08:11.544622 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24767692-8f87-45e7-b2cc-f80b48b4fcf7" path="/var/lib/kubelet/pods/24767692-8f87-45e7-b2cc-f80b48b4fcf7/volumes" Feb 17 14:08:11 crc kubenswrapper[4768]: I0217 14:08:11.569186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" 
event={"ID":"20f7a484-7e3c-4df5-84b0-98bd83632fb1","Type":"ContainerStarted","Data":"2a3b5f49d344463bc0610bcfb0c7b843602f65878cf26b8a25faa5475edb9ed0"} Feb 17 14:08:11 crc kubenswrapper[4768]: I0217 14:08:11.587824 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" podStartSLOduration=2.080225793 podStartE2EDuration="2.587803838s" podCreationTimestamp="2026-02-17 14:08:09 +0000 UTC" firstStartedPulling="2026-02-17 14:08:10.497086793 +0000 UTC m=+1909.776473245" lastFinishedPulling="2026-02-17 14:08:11.004664848 +0000 UTC m=+1910.284051290" observedRunningTime="2026-02-17 14:08:11.582535283 +0000 UTC m=+1910.861921725" watchObservedRunningTime="2026-02-17 14:08:11.587803838 +0000 UTC m=+1910.867190280" Feb 17 14:08:29 crc kubenswrapper[4768]: I0217 14:08:29.307378 4768 scope.go:117] "RemoveContainer" containerID="ab7aa0a25c48011c11b72614458f8cdfdae219a4d4976421375fd5451f2ec087" Feb 17 14:08:58 crc kubenswrapper[4768]: I0217 14:08:58.059894 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:08:58 crc kubenswrapper[4768]: I0217 14:08:58.060476 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:09:06 crc kubenswrapper[4768]: I0217 14:09:06.091877 4768 generic.go:334] "Generic (PLEG): container finished" podID="20f7a484-7e3c-4df5-84b0-98bd83632fb1" containerID="2a3b5f49d344463bc0610bcfb0c7b843602f65878cf26b8a25faa5475edb9ed0" exitCode=0 Feb 17 14:09:06 crc 
kubenswrapper[4768]: I0217 14:09:06.092240 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" event={"ID":"20f7a484-7e3c-4df5-84b0-98bd83632fb1","Type":"ContainerDied","Data":"2a3b5f49d344463bc0610bcfb0c7b843602f65878cf26b8a25faa5475edb9ed0"} Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.482636 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.572642 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ssh-key-openstack-edpm-ipam\") pod \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.572733 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovn-combined-ca-bundle\") pod \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.572789 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2srl6\" (UniqueName: \"kubernetes.io/projected/20f7a484-7e3c-4df5-84b0-98bd83632fb1-kube-api-access-2srl6\") pod \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.572834 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovncontroller-config-0\") pod \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " 
Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.573357 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-inventory\") pod \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\" (UID: \"20f7a484-7e3c-4df5-84b0-98bd83632fb1\") " Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.608624 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f7a484-7e3c-4df5-84b0-98bd83632fb1-kube-api-access-2srl6" (OuterVolumeSpecName: "kube-api-access-2srl6") pod "20f7a484-7e3c-4df5-84b0-98bd83632fb1" (UID: "20f7a484-7e3c-4df5-84b0-98bd83632fb1"). InnerVolumeSpecName "kube-api-access-2srl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.622232 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "20f7a484-7e3c-4df5-84b0-98bd83632fb1" (UID: "20f7a484-7e3c-4df5-84b0-98bd83632fb1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.622373 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "20f7a484-7e3c-4df5-84b0-98bd83632fb1" (UID: "20f7a484-7e3c-4df5-84b0-98bd83632fb1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.643272 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-inventory" (OuterVolumeSpecName: "inventory") pod "20f7a484-7e3c-4df5-84b0-98bd83632fb1" (UID: "20f7a484-7e3c-4df5-84b0-98bd83632fb1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.677515 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.677552 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.677567 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.677577 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2srl6\" (UniqueName: \"kubernetes.io/projected/20f7a484-7e3c-4df5-84b0-98bd83632fb1-kube-api-access-2srl6\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.684022 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "20f7a484-7e3c-4df5-84b0-98bd83632fb1" (UID: "20f7a484-7e3c-4df5-84b0-98bd83632fb1"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:09:07 crc kubenswrapper[4768]: I0217 14:09:07.780147 4768 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/20f7a484-7e3c-4df5-84b0-98bd83632fb1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.109784 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" event={"ID":"20f7a484-7e3c-4df5-84b0-98bd83632fb1","Type":"ContainerDied","Data":"b2d022bf406eb248c7cb0b8c5259e3e60619f83b2598a6b6a850eeca2c364113"} Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.109827 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d022bf406eb248c7cb0b8c5259e3e60619f83b2598a6b6a850eeca2c364113" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.109888 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9nqqc" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.209050 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2"] Feb 17 14:09:08 crc kubenswrapper[4768]: E0217 14:09:08.209447 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f7a484-7e3c-4df5-84b0-98bd83632fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.209467 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f7a484-7e3c-4df5-84b0-98bd83632fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.209631 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f7a484-7e3c-4df5-84b0-98bd83632fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.210354 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.212273 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.212300 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.212564 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.212871 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.213053 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.213424 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.219903 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2"] Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.295642 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.295727 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qv94q\" (UniqueName: \"kubernetes.io/projected/4fa13453-9d50-4130-ad98-37c224390a7e-kube-api-access-qv94q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.295754 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.295777 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.295800 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.295851 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.397945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.398026 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv94q\" (UniqueName: \"kubernetes.io/projected/4fa13453-9d50-4130-ad98-37c224390a7e-kube-api-access-qv94q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.398052 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.398075 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.398169 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.398221 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.401821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.402013 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.402479 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.403758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.404269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.420305 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv94q\" (UniqueName: \"kubernetes.io/projected/4fa13453-9d50-4130-ad98-37c224390a7e-kube-api-access-qv94q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:08 crc kubenswrapper[4768]: I0217 14:09:08.527114 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:09 crc kubenswrapper[4768]: I0217 14:09:09.046406 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2"] Feb 17 14:09:09 crc kubenswrapper[4768]: I0217 14:09:09.120789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" event={"ID":"4fa13453-9d50-4130-ad98-37c224390a7e","Type":"ContainerStarted","Data":"2d5220fde499781e588177280ab679c8918d683ec7c7b3e7cb9bd6882632b465"} Feb 17 14:09:11 crc kubenswrapper[4768]: I0217 14:09:11.140764 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" event={"ID":"4fa13453-9d50-4130-ad98-37c224390a7e","Type":"ContainerStarted","Data":"eeb9e5008117c43c2ac919bbe1d67017347dd08e6dbf830e80071dd1db4b9e4d"} Feb 17 14:09:11 crc kubenswrapper[4768]: I0217 14:09:11.164360 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" podStartSLOduration=2.220521687 podStartE2EDuration="3.164337049s" podCreationTimestamp="2026-02-17 14:09:08 +0000 UTC" firstStartedPulling="2026-02-17 14:09:09.055938894 +0000 UTC m=+1968.335325336" lastFinishedPulling="2026-02-17 14:09:09.999754236 +0000 UTC m=+1969.279140698" observedRunningTime="2026-02-17 14:09:11.159865737 +0000 UTC m=+1970.439252189" watchObservedRunningTime="2026-02-17 14:09:11.164337049 +0000 UTC m=+1970.443723501" Feb 17 14:09:28 crc kubenswrapper[4768]: I0217 14:09:28.060351 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:09:28 crc kubenswrapper[4768]: I0217 14:09:28.061137 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.463393 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4xqbs"] Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.466210 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.482982 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4xqbs"] Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.537279 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-catalog-content\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.537510 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj984\" (UniqueName: \"kubernetes.io/projected/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-kube-api-access-vj984\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 
14:09:43.537715 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-utilities\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.640887 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-utilities\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.641139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-catalog-content\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.641277 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj984\" (UniqueName: \"kubernetes.io/projected/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-kube-api-access-vj984\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.641511 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-utilities\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.641638 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-catalog-content\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.662007 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj984\" (UniqueName: \"kubernetes.io/projected/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-kube-api-access-vj984\") pod \"redhat-operators-4xqbs\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:43 crc kubenswrapper[4768]: I0217 14:09:43.790541 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:44 crc kubenswrapper[4768]: I0217 14:09:44.252476 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4xqbs"] Feb 17 14:09:44 crc kubenswrapper[4768]: I0217 14:09:44.490660 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xqbs" event={"ID":"383cb77b-1bd9-496b-8c51-8e2aafcbfffe","Type":"ContainerStarted","Data":"a1f28d4afc6e96f9a81059733a5a06874a7bcd29c4c71d8346d4202c2c1eafe2"} Feb 17 14:09:45 crc kubenswrapper[4768]: I0217 14:09:45.504358 4768 generic.go:334] "Generic (PLEG): container finished" podID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerID="c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29" exitCode=0 Feb 17 14:09:45 crc kubenswrapper[4768]: I0217 14:09:45.504773 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xqbs" event={"ID":"383cb77b-1bd9-496b-8c51-8e2aafcbfffe","Type":"ContainerDied","Data":"c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29"} Feb 17 14:09:45 crc kubenswrapper[4768]: I0217 14:09:45.509575 
4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:09:47 crc kubenswrapper[4768]: I0217 14:09:47.523050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xqbs" event={"ID":"383cb77b-1bd9-496b-8c51-8e2aafcbfffe","Type":"ContainerStarted","Data":"9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838"} Feb 17 14:09:48 crc kubenswrapper[4768]: I0217 14:09:48.538916 4768 generic.go:334] "Generic (PLEG): container finished" podID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerID="9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838" exitCode=0 Feb 17 14:09:48 crc kubenswrapper[4768]: I0217 14:09:48.539025 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xqbs" event={"ID":"383cb77b-1bd9-496b-8c51-8e2aafcbfffe","Type":"ContainerDied","Data":"9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838"} Feb 17 14:09:49 crc kubenswrapper[4768]: I0217 14:09:49.554417 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xqbs" event={"ID":"383cb77b-1bd9-496b-8c51-8e2aafcbfffe","Type":"ContainerStarted","Data":"97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b"} Feb 17 14:09:49 crc kubenswrapper[4768]: I0217 14:09:49.587720 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4xqbs" podStartSLOduration=3.066936017 podStartE2EDuration="6.587691638s" podCreationTimestamp="2026-02-17 14:09:43 +0000 UTC" firstStartedPulling="2026-02-17 14:09:45.509187494 +0000 UTC m=+2004.788573946" lastFinishedPulling="2026-02-17 14:09:49.029943105 +0000 UTC m=+2008.309329567" observedRunningTime="2026-02-17 14:09:49.570821704 +0000 UTC m=+2008.850208166" watchObservedRunningTime="2026-02-17 14:09:49.587691638 +0000 UTC m=+2008.867078080" Feb 17 14:09:53 crc kubenswrapper[4768]: I0217 
14:09:53.790938 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:53 crc kubenswrapper[4768]: I0217 14:09:53.791507 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:09:54 crc kubenswrapper[4768]: I0217 14:09:54.604856 4768 generic.go:334] "Generic (PLEG): container finished" podID="4fa13453-9d50-4130-ad98-37c224390a7e" containerID="eeb9e5008117c43c2ac919bbe1d67017347dd08e6dbf830e80071dd1db4b9e4d" exitCode=0 Feb 17 14:09:54 crc kubenswrapper[4768]: I0217 14:09:54.604908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" event={"ID":"4fa13453-9d50-4130-ad98-37c224390a7e","Type":"ContainerDied","Data":"eeb9e5008117c43c2ac919bbe1d67017347dd08e6dbf830e80071dd1db4b9e4d"} Feb 17 14:09:54 crc kubenswrapper[4768]: I0217 14:09:54.850064 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4xqbs" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="registry-server" probeResult="failure" output=< Feb 17 14:09:54 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 14:09:54 crc kubenswrapper[4768]: > Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.032506 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.229808 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-inventory\") pod \"4fa13453-9d50-4130-ad98-37c224390a7e\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.230199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-nova-metadata-neutron-config-0\") pod \"4fa13453-9d50-4130-ad98-37c224390a7e\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.230363 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv94q\" (UniqueName: \"kubernetes.io/projected/4fa13453-9d50-4130-ad98-37c224390a7e-kube-api-access-qv94q\") pod \"4fa13453-9d50-4130-ad98-37c224390a7e\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.230444 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-ssh-key-openstack-edpm-ipam\") pod \"4fa13453-9d50-4130-ad98-37c224390a7e\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.230474 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-metadata-combined-ca-bundle\") pod \"4fa13453-9d50-4130-ad98-37c224390a7e\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " Feb 17 14:09:56 crc 
kubenswrapper[4768]: I0217 14:09:56.230516 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4fa13453-9d50-4130-ad98-37c224390a7e\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.236260 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa13453-9d50-4130-ad98-37c224390a7e-kube-api-access-qv94q" (OuterVolumeSpecName: "kube-api-access-qv94q") pod "4fa13453-9d50-4130-ad98-37c224390a7e" (UID: "4fa13453-9d50-4130-ad98-37c224390a7e"). InnerVolumeSpecName "kube-api-access-qv94q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.236474 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4fa13453-9d50-4130-ad98-37c224390a7e" (UID: "4fa13453-9d50-4130-ad98-37c224390a7e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.255224 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-inventory" (OuterVolumeSpecName: "inventory") pod "4fa13453-9d50-4130-ad98-37c224390a7e" (UID: "4fa13453-9d50-4130-ad98-37c224390a7e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.256691 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4fa13453-9d50-4130-ad98-37c224390a7e" (UID: "4fa13453-9d50-4130-ad98-37c224390a7e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:56 crc kubenswrapper[4768]: E0217 14:09:56.259465 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0 podName:4fa13453-9d50-4130-ad98-37c224390a7e nodeName:}" failed. No retries permitted until 2026-02-17 14:09:56.759433128 +0000 UTC m=+2016.038819570 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "neutron-ovn-metadata-agent-neutron-config-0" (UniqueName: "kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0") pod "4fa13453-9d50-4130-ad98-37c224390a7e" (UID: "4fa13453-9d50-4130-ad98-37c224390a7e") : error deleting /var/lib/kubelet/pods/4fa13453-9d50-4130-ad98-37c224390a7e/volume-subpaths: remove /var/lib/kubelet/pods/4fa13453-9d50-4130-ad98-37c224390a7e/volume-subpaths: no such file or directory Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.261436 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4fa13453-9d50-4130-ad98-37c224390a7e" (UID: "4fa13453-9d50-4130-ad98-37c224390a7e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.333344 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.333385 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.333400 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv94q\" (UniqueName: \"kubernetes.io/projected/4fa13453-9d50-4130-ad98-37c224390a7e-kube-api-access-qv94q\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.333415 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.333428 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.627282 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" event={"ID":"4fa13453-9d50-4130-ad98-37c224390a7e","Type":"ContainerDied","Data":"2d5220fde499781e588177280ab679c8918d683ec7c7b3e7cb9bd6882632b465"} Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.627327 4768 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2d5220fde499781e588177280ab679c8918d683ec7c7b3e7cb9bd6882632b465" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.627387 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.799336 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp"] Feb 17 14:09:56 crc kubenswrapper[4768]: E0217 14:09:56.799831 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa13453-9d50-4130-ad98-37c224390a7e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.799855 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa13453-9d50-4130-ad98-37c224390a7e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.800127 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa13453-9d50-4130-ad98-37c224390a7e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.800980 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.804832 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.809671 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp"] Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.842258 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4fa13453-9d50-4130-ad98-37c224390a7e\" (UID: \"4fa13453-9d50-4130-ad98-37c224390a7e\") " Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.845620 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4fa13453-9d50-4130-ad98-37c224390a7e" (UID: "4fa13453-9d50-4130-ad98-37c224390a7e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.943846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hhvj\" (UniqueName: \"kubernetes.io/projected/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-kube-api-access-8hhvj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.943932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.944075 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.944136 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.944161 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:56 crc kubenswrapper[4768]: I0217 14:09:56.944413 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4fa13453-9d50-4130-ad98-37c224390a7e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.046046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hhvj\" (UniqueName: \"kubernetes.io/projected/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-kube-api-access-8hhvj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.046235 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.046404 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.046485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.046521 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.050435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.051021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.051539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.053574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.073719 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hhvj\" (UniqueName: \"kubernetes.io/projected/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-kube-api-access-8hhvj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.132736 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" Feb 17 14:09:57 crc kubenswrapper[4768]: W0217 14:09:57.680577 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30a6ce8f_2b64_4ba9_803c_15c5bbde1cf8.slice/crio-0c80b54ebb6977b8fe644b57bfbdc155888936892344dab7b2fd257944e6a8b3 WatchSource:0}: Error finding container 0c80b54ebb6977b8fe644b57bfbdc155888936892344dab7b2fd257944e6a8b3: Status 404 returned error can't find the container with id 0c80b54ebb6977b8fe644b57bfbdc155888936892344dab7b2fd257944e6a8b3 Feb 17 14:09:57 crc kubenswrapper[4768]: I0217 14:09:57.683864 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp"] Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.060438 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.060507 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.060560 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.061445 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f4b1ed5ad4696f245b46f42d7dd4597fdcb14a363987811db0ee8a9896aa7bd9"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.061496 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://f4b1ed5ad4696f245b46f42d7dd4597fdcb14a363987811db0ee8a9896aa7bd9" gracePeriod=600 Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.647739 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" event={"ID":"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8","Type":"ContainerStarted","Data":"0c80b54ebb6977b8fe644b57bfbdc155888936892344dab7b2fd257944e6a8b3"} Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.650298 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="f4b1ed5ad4696f245b46f42d7dd4597fdcb14a363987811db0ee8a9896aa7bd9" exitCode=0 Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.650328 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"f4b1ed5ad4696f245b46f42d7dd4597fdcb14a363987811db0ee8a9896aa7bd9"} Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.650344 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"} Feb 17 14:09:58 crc kubenswrapper[4768]: I0217 14:09:58.650362 4768 
scope.go:117] "RemoveContainer" containerID="1a561dcdfe0c02cbee0707f068057b98f49569f2f0f313ebde599d9ef2366bc7" Feb 17 14:09:59 crc kubenswrapper[4768]: I0217 14:09:59.660491 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" event={"ID":"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8","Type":"ContainerStarted","Data":"fb1b5fd978f27f8b955485db2b38435e646b6141ee484caa95756fa597741f79"} Feb 17 14:09:59 crc kubenswrapper[4768]: I0217 14:09:59.680325 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" podStartSLOduration=2.88167925 podStartE2EDuration="3.680301316s" podCreationTimestamp="2026-02-17 14:09:56 +0000 UTC" firstStartedPulling="2026-02-17 14:09:57.683756141 +0000 UTC m=+2016.963142583" lastFinishedPulling="2026-02-17 14:09:58.482378197 +0000 UTC m=+2017.761764649" observedRunningTime="2026-02-17 14:09:59.673233741 +0000 UTC m=+2018.952620203" watchObservedRunningTime="2026-02-17 14:09:59.680301316 +0000 UTC m=+2018.959687768" Feb 17 14:10:03 crc kubenswrapper[4768]: I0217 14:10:03.839618 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:10:03 crc kubenswrapper[4768]: I0217 14:10:03.889157 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:10:04 crc kubenswrapper[4768]: I0217 14:10:04.092612 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4xqbs"] Feb 17 14:10:05 crc kubenswrapper[4768]: I0217 14:10:05.727554 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4xqbs" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="registry-server" containerID="cri-o://97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b" 
gracePeriod=2 Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.169256 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.348207 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-utilities\") pod \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.348646 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-catalog-content\") pod \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.348785 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj984\" (UniqueName: \"kubernetes.io/projected/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-kube-api-access-vj984\") pod \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\" (UID: \"383cb77b-1bd9-496b-8c51-8e2aafcbfffe\") " Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.349419 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-utilities" (OuterVolumeSpecName: "utilities") pod "383cb77b-1bd9-496b-8c51-8e2aafcbfffe" (UID: "383cb77b-1bd9-496b-8c51-8e2aafcbfffe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.355197 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-kube-api-access-vj984" (OuterVolumeSpecName: "kube-api-access-vj984") pod "383cb77b-1bd9-496b-8c51-8e2aafcbfffe" (UID: "383cb77b-1bd9-496b-8c51-8e2aafcbfffe"). InnerVolumeSpecName "kube-api-access-vj984". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.452023 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj984\" (UniqueName: \"kubernetes.io/projected/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-kube-api-access-vj984\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.452073 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.500518 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "383cb77b-1bd9-496b-8c51-8e2aafcbfffe" (UID: "383cb77b-1bd9-496b-8c51-8e2aafcbfffe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.553699 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383cb77b-1bd9-496b-8c51-8e2aafcbfffe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.745035 4768 generic.go:334] "Generic (PLEG): container finished" podID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerID="97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b" exitCode=0 Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.745079 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xqbs" event={"ID":"383cb77b-1bd9-496b-8c51-8e2aafcbfffe","Type":"ContainerDied","Data":"97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b"} Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.745150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4xqbs" event={"ID":"383cb77b-1bd9-496b-8c51-8e2aafcbfffe","Type":"ContainerDied","Data":"a1f28d4afc6e96f9a81059733a5a06874a7bcd29c4c71d8346d4202c2c1eafe2"} Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.745207 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4xqbs" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.745456 4768 scope.go:117] "RemoveContainer" containerID="97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.789599 4768 scope.go:117] "RemoveContainer" containerID="9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.804656 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4xqbs"] Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.812470 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4xqbs"] Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.835432 4768 scope.go:117] "RemoveContainer" containerID="c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.906355 4768 scope.go:117] "RemoveContainer" containerID="97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b" Feb 17 14:10:06 crc kubenswrapper[4768]: E0217 14:10:06.907080 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b\": container with ID starting with 97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b not found: ID does not exist" containerID="97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.907194 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b"} err="failed to get container status \"97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b\": rpc error: code = NotFound desc = could not find container 
\"97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b\": container with ID starting with 97a64d6af0e7928c933f8e4dd4fb9434a5adab2b5f473feb33fcde7ac782848b not found: ID does not exist" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.907230 4768 scope.go:117] "RemoveContainer" containerID="9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838" Feb 17 14:10:06 crc kubenswrapper[4768]: E0217 14:10:06.907655 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838\": container with ID starting with 9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838 not found: ID does not exist" containerID="9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.907685 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838"} err="failed to get container status \"9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838\": rpc error: code = NotFound desc = could not find container \"9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838\": container with ID starting with 9a66c2b4ba29bbebed8d88512de8e0862f472c6390a77b50b1e350b247cd7838 not found: ID does not exist" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.907701 4768 scope.go:117] "RemoveContainer" containerID="c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29" Feb 17 14:10:06 crc kubenswrapper[4768]: E0217 14:10:06.908179 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29\": container with ID starting with c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29 not found: ID does not exist" 
containerID="c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29" Feb 17 14:10:06 crc kubenswrapper[4768]: I0217 14:10:06.908211 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29"} err="failed to get container status \"c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29\": rpc error: code = NotFound desc = could not find container \"c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29\": container with ID starting with c09d10ea87d1685e55ba71eb4e4e59224310302441428919cd9857afbbcdad29 not found: ID does not exist" Feb 17 14:10:07 crc kubenswrapper[4768]: I0217 14:10:07.551323 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" path="/var/lib/kubelet/pods/383cb77b-1bd9-496b-8c51-8e2aafcbfffe/volumes" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.512846 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n9rns"] Feb 17 14:10:40 crc kubenswrapper[4768]: E0217 14:10:40.514659 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="registry-server" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.514747 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="registry-server" Feb 17 14:10:40 crc kubenswrapper[4768]: E0217 14:10:40.514841 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="extract-content" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.514900 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="extract-content" Feb 17 14:10:40 crc kubenswrapper[4768]: E0217 14:10:40.514954 4768 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="extract-utilities" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.515008 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="extract-utilities" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.515287 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="383cb77b-1bd9-496b-8c51-8e2aafcbfffe" containerName="registry-server" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.516944 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.535185 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9rns"] Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.615961 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-utilities\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.616308 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p46rl\" (UniqueName: \"kubernetes.io/projected/495f0251-d960-442b-89a8-290293e5cc0c-kube-api-access-p46rl\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.616451 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-catalog-content\") pod \"redhat-marketplace-n9rns\" 
(UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.717826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p46rl\" (UniqueName: \"kubernetes.io/projected/495f0251-d960-442b-89a8-290293e5cc0c-kube-api-access-p46rl\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.718127 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-catalog-content\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.718356 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-utilities\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.718666 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-catalog-content\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.718721 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-utilities\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " 
pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.744459 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p46rl\" (UniqueName: \"kubernetes.io/projected/495f0251-d960-442b-89a8-290293e5cc0c-kube-api-access-p46rl\") pod \"redhat-marketplace-n9rns\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:40 crc kubenswrapper[4768]: I0217 14:10:40.837984 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:41 crc kubenswrapper[4768]: I0217 14:10:41.328487 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9rns"] Feb 17 14:10:41 crc kubenswrapper[4768]: E0217 14:10:41.723764 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod495f0251_d960_442b_89a8_290293e5cc0c.slice/crio-ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod495f0251_d960_442b_89a8_290293e5cc0c.slice/crio-conmon-ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09.scope\": RecentStats: unable to find data in memory cache]" Feb 17 14:10:42 crc kubenswrapper[4768]: I0217 14:10:42.095594 4768 generic.go:334] "Generic (PLEG): container finished" podID="495f0251-d960-442b-89a8-290293e5cc0c" containerID="ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09" exitCode=0 Feb 17 14:10:42 crc kubenswrapper[4768]: I0217 14:10:42.095678 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9rns" 
event={"ID":"495f0251-d960-442b-89a8-290293e5cc0c","Type":"ContainerDied","Data":"ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09"} Feb 17 14:10:42 crc kubenswrapper[4768]: I0217 14:10:42.095996 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9rns" event={"ID":"495f0251-d960-442b-89a8-290293e5cc0c","Type":"ContainerStarted","Data":"778ed1bde146dd130a427d304f9df366b27f2818669cc88334ad178255d9a2d6"} Feb 17 14:10:43 crc kubenswrapper[4768]: I0217 14:10:43.110441 4768 generic.go:334] "Generic (PLEG): container finished" podID="495f0251-d960-442b-89a8-290293e5cc0c" containerID="308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6" exitCode=0 Feb 17 14:10:43 crc kubenswrapper[4768]: I0217 14:10:43.110512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9rns" event={"ID":"495f0251-d960-442b-89a8-290293e5cc0c","Type":"ContainerDied","Data":"308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6"} Feb 17 14:10:44 crc kubenswrapper[4768]: I0217 14:10:44.123955 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9rns" event={"ID":"495f0251-d960-442b-89a8-290293e5cc0c","Type":"ContainerStarted","Data":"69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156"} Feb 17 14:10:44 crc kubenswrapper[4768]: I0217 14:10:44.165318 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n9rns" podStartSLOduration=2.760673598 podStartE2EDuration="4.165279832s" podCreationTimestamp="2026-02-17 14:10:40 +0000 UTC" firstStartedPulling="2026-02-17 14:10:42.098037867 +0000 UTC m=+2061.377424309" lastFinishedPulling="2026-02-17 14:10:43.502644091 +0000 UTC m=+2062.782030543" observedRunningTime="2026-02-17 14:10:44.15061639 +0000 UTC m=+2063.430002852" watchObservedRunningTime="2026-02-17 14:10:44.165279832 +0000 UTC 
m=+2063.444666314" Feb 17 14:10:50 crc kubenswrapper[4768]: I0217 14:10:50.838827 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:50 crc kubenswrapper[4768]: I0217 14:10:50.839618 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:50 crc kubenswrapper[4768]: I0217 14:10:50.915842 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:51 crc kubenswrapper[4768]: I0217 14:10:51.233015 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:51 crc kubenswrapper[4768]: I0217 14:10:51.331238 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9rns"] Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.205872 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n9rns" podUID="495f0251-d960-442b-89a8-290293e5cc0c" containerName="registry-server" containerID="cri-o://69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156" gracePeriod=2 Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.653519 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.770962 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-catalog-content\") pod \"495f0251-d960-442b-89a8-290293e5cc0c\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.771173 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-utilities\") pod \"495f0251-d960-442b-89a8-290293e5cc0c\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.771300 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p46rl\" (UniqueName: \"kubernetes.io/projected/495f0251-d960-442b-89a8-290293e5cc0c-kube-api-access-p46rl\") pod \"495f0251-d960-442b-89a8-290293e5cc0c\" (UID: \"495f0251-d960-442b-89a8-290293e5cc0c\") " Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.772390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-utilities" (OuterVolumeSpecName: "utilities") pod "495f0251-d960-442b-89a8-290293e5cc0c" (UID: "495f0251-d960-442b-89a8-290293e5cc0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.781178 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495f0251-d960-442b-89a8-290293e5cc0c-kube-api-access-p46rl" (OuterVolumeSpecName: "kube-api-access-p46rl") pod "495f0251-d960-442b-89a8-290293e5cc0c" (UID: "495f0251-d960-442b-89a8-290293e5cc0c"). InnerVolumeSpecName "kube-api-access-p46rl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.797508 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "495f0251-d960-442b-89a8-290293e5cc0c" (UID: "495f0251-d960-442b-89a8-290293e5cc0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.873429 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.873478 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495f0251-d960-442b-89a8-290293e5cc0c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:53 crc kubenswrapper[4768]: I0217 14:10:53.873492 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p46rl\" (UniqueName: \"kubernetes.io/projected/495f0251-d960-442b-89a8-290293e5cc0c-kube-api-access-p46rl\") on node \"crc\" DevicePath \"\"" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.217538 4768 generic.go:334] "Generic (PLEG): container finished" podID="495f0251-d960-442b-89a8-290293e5cc0c" containerID="69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156" exitCode=0 Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.217589 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9rns" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.217605 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9rns" event={"ID":"495f0251-d960-442b-89a8-290293e5cc0c","Type":"ContainerDied","Data":"69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156"} Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.217665 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9rns" event={"ID":"495f0251-d960-442b-89a8-290293e5cc0c","Type":"ContainerDied","Data":"778ed1bde146dd130a427d304f9df366b27f2818669cc88334ad178255d9a2d6"} Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.217707 4768 scope.go:117] "RemoveContainer" containerID="69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.250331 4768 scope.go:117] "RemoveContainer" containerID="308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.261137 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9rns"] Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.274591 4768 scope.go:117] "RemoveContainer" containerID="ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.281847 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9rns"] Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.342238 4768 scope.go:117] "RemoveContainer" containerID="69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156" Feb 17 14:10:54 crc kubenswrapper[4768]: E0217 14:10:54.342747 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156\": container with ID starting with 69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156 not found: ID does not exist" containerID="69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.342812 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156"} err="failed to get container status \"69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156\": rpc error: code = NotFound desc = could not find container \"69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156\": container with ID starting with 69d8d2dd77fe4a9d29cd4b838a449f3adfa9844a2116aa208f3715a718334156 not found: ID does not exist" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.342854 4768 scope.go:117] "RemoveContainer" containerID="308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6" Feb 17 14:10:54 crc kubenswrapper[4768]: E0217 14:10:54.343247 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6\": container with ID starting with 308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6 not found: ID does not exist" containerID="308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.343289 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6"} err="failed to get container status \"308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6\": rpc error: code = NotFound desc = could not find container \"308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6\": container with ID 
starting with 308b8f6771cc38f9e8031bde75b23cf182e873e518474ebc038f50efcced86e6 not found: ID does not exist" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.343317 4768 scope.go:117] "RemoveContainer" containerID="ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09" Feb 17 14:10:54 crc kubenswrapper[4768]: E0217 14:10:54.343702 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09\": container with ID starting with ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09 not found: ID does not exist" containerID="ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09" Feb 17 14:10:54 crc kubenswrapper[4768]: I0217 14:10:54.343806 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09"} err="failed to get container status \"ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09\": rpc error: code = NotFound desc = could not find container \"ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09\": container with ID starting with ee9010ad4e72f3ab10f5245953a89035d1a2f2ff4c9bf27a1f8d9d477ae01a09 not found: ID does not exist" Feb 17 14:10:55 crc kubenswrapper[4768]: I0217 14:10:55.554647 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495f0251-d960-442b-89a8-290293e5cc0c" path="/var/lib/kubelet/pods/495f0251-d960-442b-89a8-290293e5cc0c/volumes" Feb 17 14:11:58 crc kubenswrapper[4768]: I0217 14:11:58.060376 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:11:58 crc kubenswrapper[4768]: I0217 
14:11:58.060958 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:12:28 crc kubenswrapper[4768]: I0217 14:12:28.059768 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:12:28 crc kubenswrapper[4768]: I0217 14:12:28.060364 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.059606 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.060204 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.060250 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.060929 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.060975 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" gracePeriod=600 Feb 17 14:12:58 crc kubenswrapper[4768]: E0217 14:12:58.190294 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.363924 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" exitCode=0 Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.364391 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"} Feb 17 14:12:58 crc 
kubenswrapper[4768]: I0217 14:12:58.364492 4768 scope.go:117] "RemoveContainer" containerID="f4b1ed5ad4696f245b46f42d7dd4597fdcb14a363987811db0ee8a9896aa7bd9" Feb 17 14:12:58 crc kubenswrapper[4768]: I0217 14:12:58.366170 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:12:58 crc kubenswrapper[4768]: E0217 14:12:58.366800 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.109842 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z9w2w"] Feb 17 14:13:03 crc kubenswrapper[4768]: E0217 14:13:03.111083 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495f0251-d960-442b-89a8-290293e5cc0c" containerName="extract-utilities" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.111101 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="495f0251-d960-442b-89a8-290293e5cc0c" containerName="extract-utilities" Feb 17 14:13:03 crc kubenswrapper[4768]: E0217 14:13:03.111142 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495f0251-d960-442b-89a8-290293e5cc0c" containerName="extract-content" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.111148 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="495f0251-d960-442b-89a8-290293e5cc0c" containerName="extract-content" Feb 17 14:13:03 crc kubenswrapper[4768]: E0217 14:13:03.111167 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495f0251-d960-442b-89a8-290293e5cc0c" 
containerName="registry-server" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.111173 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="495f0251-d960-442b-89a8-290293e5cc0c" containerName="registry-server" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.111485 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="495f0251-d960-442b-89a8-290293e5cc0c" containerName="registry-server" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.113526 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.124341 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9w2w"] Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.142320 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2g5p\" (UniqueName: \"kubernetes.io/projected/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-kube-api-access-g2g5p\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.142370 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-catalog-content\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.142569 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-utilities\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " 
pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.244023 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2g5p\" (UniqueName: \"kubernetes.io/projected/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-kube-api-access-g2g5p\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.244086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-catalog-content\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.244197 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-utilities\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.244650 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-utilities\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.244867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-catalog-content\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " 
pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.264357 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2g5p\" (UniqueName: \"kubernetes.io/projected/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-kube-api-access-g2g5p\") pod \"community-operators-z9w2w\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:03 crc kubenswrapper[4768]: I0217 14:13:03.442661 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:04 crc kubenswrapper[4768]: I0217 14:13:04.036410 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9w2w"] Feb 17 14:13:04 crc kubenswrapper[4768]: I0217 14:13:04.418805 4768 generic.go:334] "Generic (PLEG): container finished" podID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerID="835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f" exitCode=0 Feb 17 14:13:04 crc kubenswrapper[4768]: I0217 14:13:04.418912 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9w2w" event={"ID":"11ac5bd1-8374-4e07-8e0c-6e9fe426130d","Type":"ContainerDied","Data":"835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f"} Feb 17 14:13:04 crc kubenswrapper[4768]: I0217 14:13:04.419160 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9w2w" event={"ID":"11ac5bd1-8374-4e07-8e0c-6e9fe426130d","Type":"ContainerStarted","Data":"0f016222df5214aa1f6305f0e4935fd1172c653e2588d34a8f92ead349931f6a"} Feb 17 14:13:06 crc kubenswrapper[4768]: I0217 14:13:06.434228 4768 generic.go:334] "Generic (PLEG): container finished" podID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerID="1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf" exitCode=0 Feb 17 14:13:06 crc 
kubenswrapper[4768]: I0217 14:13:06.434282 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9w2w" event={"ID":"11ac5bd1-8374-4e07-8e0c-6e9fe426130d","Type":"ContainerDied","Data":"1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf"} Feb 17 14:13:07 crc kubenswrapper[4768]: I0217 14:13:07.444503 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9w2w" event={"ID":"11ac5bd1-8374-4e07-8e0c-6e9fe426130d","Type":"ContainerStarted","Data":"e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd"} Feb 17 14:13:07 crc kubenswrapper[4768]: I0217 14:13:07.460831 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z9w2w" podStartSLOduration=1.8222155 podStartE2EDuration="4.460810042s" podCreationTimestamp="2026-02-17 14:13:03 +0000 UTC" firstStartedPulling="2026-02-17 14:13:04.42016999 +0000 UTC m=+2203.699556432" lastFinishedPulling="2026-02-17 14:13:07.058764512 +0000 UTC m=+2206.338150974" observedRunningTime="2026-02-17 14:13:07.460262097 +0000 UTC m=+2206.739648539" watchObservedRunningTime="2026-02-17 14:13:07.460810042 +0000 UTC m=+2206.740196504" Feb 17 14:13:10 crc kubenswrapper[4768]: I0217 14:13:10.534637 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:13:10 crc kubenswrapper[4768]: E0217 14:13:10.535355 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:13:13 crc kubenswrapper[4768]: I0217 14:13:13.443636 4768 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:13 crc kubenswrapper[4768]: I0217 14:13:13.444178 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:13 crc kubenswrapper[4768]: I0217 14:13:13.500026 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:13 crc kubenswrapper[4768]: I0217 14:13:13.587845 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:14 crc kubenswrapper[4768]: I0217 14:13:14.918368 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9w2w"] Feb 17 14:13:15 crc kubenswrapper[4768]: I0217 14:13:15.509030 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z9w2w" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="registry-server" containerID="cri-o://e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd" gracePeriod=2 Feb 17 14:13:15 crc kubenswrapper[4768]: I0217 14:13:15.953115 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.096341 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-utilities\") pod \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.096395 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2g5p\" (UniqueName: \"kubernetes.io/projected/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-kube-api-access-g2g5p\") pod \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.096420 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-catalog-content\") pod \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\" (UID: \"11ac5bd1-8374-4e07-8e0c-6e9fe426130d\") " Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.099044 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-utilities" (OuterVolumeSpecName: "utilities") pod "11ac5bd1-8374-4e07-8e0c-6e9fe426130d" (UID: "11ac5bd1-8374-4e07-8e0c-6e9fe426130d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.103362 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-kube-api-access-g2g5p" (OuterVolumeSpecName: "kube-api-access-g2g5p") pod "11ac5bd1-8374-4e07-8e0c-6e9fe426130d" (UID: "11ac5bd1-8374-4e07-8e0c-6e9fe426130d"). InnerVolumeSpecName "kube-api-access-g2g5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.147729 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11ac5bd1-8374-4e07-8e0c-6e9fe426130d" (UID: "11ac5bd1-8374-4e07-8e0c-6e9fe426130d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.198347 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.198400 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2g5p\" (UniqueName: \"kubernetes.io/projected/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-kube-api-access-g2g5p\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.198416 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ac5bd1-8374-4e07-8e0c-6e9fe426130d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.521575 4768 generic.go:334] "Generic (PLEG): container finished" podID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerID="e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd" exitCode=0 Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.521628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9w2w" event={"ID":"11ac5bd1-8374-4e07-8e0c-6e9fe426130d","Type":"ContainerDied","Data":"e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd"} Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.521659 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-z9w2w" event={"ID":"11ac5bd1-8374-4e07-8e0c-6e9fe426130d","Type":"ContainerDied","Data":"0f016222df5214aa1f6305f0e4935fd1172c653e2588d34a8f92ead349931f6a"} Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.521680 4768 scope.go:117] "RemoveContainer" containerID="e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.521690 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9w2w" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.552891 4768 scope.go:117] "RemoveContainer" containerID="1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.558916 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9w2w"] Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.566953 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z9w2w"] Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.579136 4768 scope.go:117] "RemoveContainer" containerID="835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.623803 4768 scope.go:117] "RemoveContainer" containerID="e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd" Feb 17 14:13:16 crc kubenswrapper[4768]: E0217 14:13:16.624280 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd\": container with ID starting with e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd not found: ID does not exist" containerID="e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 
14:13:16.624333 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd"} err="failed to get container status \"e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd\": rpc error: code = NotFound desc = could not find container \"e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd\": container with ID starting with e12a1e496e46a20054144b78f32c8d4615b6b958bc3ddf0aea18c07ea33930cd not found: ID does not exist" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.624364 4768 scope.go:117] "RemoveContainer" containerID="1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf" Feb 17 14:13:16 crc kubenswrapper[4768]: E0217 14:13:16.624740 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf\": container with ID starting with 1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf not found: ID does not exist" containerID="1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.624769 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf"} err="failed to get container status \"1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf\": rpc error: code = NotFound desc = could not find container \"1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf\": container with ID starting with 1bb195c693d1b9a8399e45aeef70ebdffcfd5b1edb1e2fe91ec54b2a2d789cdf not found: ID does not exist" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.624788 4768 scope.go:117] "RemoveContainer" containerID="835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f" Feb 17 14:13:16 crc 
kubenswrapper[4768]: E0217 14:13:16.625134 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f\": container with ID starting with 835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f not found: ID does not exist" containerID="835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f" Feb 17 14:13:16 crc kubenswrapper[4768]: I0217 14:13:16.625184 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f"} err="failed to get container status \"835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f\": rpc error: code = NotFound desc = could not find container \"835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f\": container with ID starting with 835fedf213a218af972703725c8bd6dce31cfd27a6e48732557e4bc4226ffc3f not found: ID does not exist" Feb 17 14:13:17 crc kubenswrapper[4768]: I0217 14:13:17.544746 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" path="/var/lib/kubelet/pods/11ac5bd1-8374-4e07-8e0c-6e9fe426130d/volumes" Feb 17 14:13:25 crc kubenswrapper[4768]: I0217 14:13:25.534220 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:13:25 crc kubenswrapper[4768]: E0217 14:13:25.534888 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:13:26 crc 
kubenswrapper[4768]: I0217 14:13:26.785163 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-89898"] Feb 17 14:13:26 crc kubenswrapper[4768]: E0217 14:13:26.786124 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="extract-content" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.786143 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="extract-content" Feb 17 14:13:26 crc kubenswrapper[4768]: E0217 14:13:26.786168 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="extract-utilities" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.786176 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="extract-utilities" Feb 17 14:13:26 crc kubenswrapper[4768]: E0217 14:13:26.786199 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="registry-server" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.786216 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="registry-server" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.786436 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac5bd1-8374-4e07-8e0c-6e9fe426130d" containerName="registry-server" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.787803 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.805924 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89898"] Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.895266 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-catalog-content\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.895714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksdz\" (UniqueName: \"kubernetes.io/projected/b3996e64-3ade-462a-8a48-e8c2b4fab078-kube-api-access-bksdz\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.895743 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-utilities\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.997761 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bksdz\" (UniqueName: \"kubernetes.io/projected/b3996e64-3ade-462a-8a48-e8c2b4fab078-kube-api-access-bksdz\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.997814 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-utilities\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.997837 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-catalog-content\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.998288 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-catalog-content\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:26 crc kubenswrapper[4768]: I0217 14:13:26.998853 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-utilities\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:27 crc kubenswrapper[4768]: I0217 14:13:27.028277 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bksdz\" (UniqueName: \"kubernetes.io/projected/b3996e64-3ade-462a-8a48-e8c2b4fab078-kube-api-access-bksdz\") pod \"certified-operators-89898\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:27 crc kubenswrapper[4768]: I0217 14:13:27.127880 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:27 crc kubenswrapper[4768]: I0217 14:13:27.471870 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89898"] Feb 17 14:13:27 crc kubenswrapper[4768]: W0217 14:13:27.476938 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3996e64_3ade_462a_8a48_e8c2b4fab078.slice/crio-b91a82fd36637df8815a5e9fe491f7393921dfa527e908f1935dc0f0fcbc16b4 WatchSource:0}: Error finding container b91a82fd36637df8815a5e9fe491f7393921dfa527e908f1935dc0f0fcbc16b4: Status 404 returned error can't find the container with id b91a82fd36637df8815a5e9fe491f7393921dfa527e908f1935dc0f0fcbc16b4 Feb 17 14:13:27 crc kubenswrapper[4768]: I0217 14:13:27.626317 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89898" event={"ID":"b3996e64-3ade-462a-8a48-e8c2b4fab078","Type":"ContainerStarted","Data":"b91a82fd36637df8815a5e9fe491f7393921dfa527e908f1935dc0f0fcbc16b4"} Feb 17 14:13:28 crc kubenswrapper[4768]: I0217 14:13:28.638826 4768 generic.go:334] "Generic (PLEG): container finished" podID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerID="0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6" exitCode=0 Feb 17 14:13:28 crc kubenswrapper[4768]: I0217 14:13:28.638976 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89898" event={"ID":"b3996e64-3ade-462a-8a48-e8c2b4fab078","Type":"ContainerDied","Data":"0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6"} Feb 17 14:13:30 crc kubenswrapper[4768]: I0217 14:13:30.663341 4768 generic.go:334] "Generic (PLEG): container finished" podID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerID="79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd" exitCode=0 Feb 17 14:13:30 crc kubenswrapper[4768]: I0217 
14:13:30.663558 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89898" event={"ID":"b3996e64-3ade-462a-8a48-e8c2b4fab078","Type":"ContainerDied","Data":"79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd"} Feb 17 14:13:31 crc kubenswrapper[4768]: I0217 14:13:31.671982 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89898" event={"ID":"b3996e64-3ade-462a-8a48-e8c2b4fab078","Type":"ContainerStarted","Data":"63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e"} Feb 17 14:13:31 crc kubenswrapper[4768]: I0217 14:13:31.704957 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-89898" podStartSLOduration=3.261731152 podStartE2EDuration="5.704936169s" podCreationTimestamp="2026-02-17 14:13:26 +0000 UTC" firstStartedPulling="2026-02-17 14:13:28.641297886 +0000 UTC m=+2227.920684328" lastFinishedPulling="2026-02-17 14:13:31.084502883 +0000 UTC m=+2230.363889345" observedRunningTime="2026-02-17 14:13:31.696768565 +0000 UTC m=+2230.976155007" watchObservedRunningTime="2026-02-17 14:13:31.704936169 +0000 UTC m=+2230.984322621" Feb 17 14:13:36 crc kubenswrapper[4768]: I0217 14:13:36.535038 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:13:36 crc kubenswrapper[4768]: E0217 14:13:36.536582 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:13:37 crc kubenswrapper[4768]: I0217 14:13:37.128870 4768 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:37 crc kubenswrapper[4768]: I0217 14:13:37.128960 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:37 crc kubenswrapper[4768]: I0217 14:13:37.177098 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:37 crc kubenswrapper[4768]: I0217 14:13:37.805251 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:39 crc kubenswrapper[4768]: I0217 14:13:39.182841 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89898"] Feb 17 14:13:39 crc kubenswrapper[4768]: I0217 14:13:39.762222 4768 generic.go:334] "Generic (PLEG): container finished" podID="30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" containerID="fb1b5fd978f27f8b955485db2b38435e646b6141ee484caa95756fa597741f79" exitCode=0 Feb 17 14:13:39 crc kubenswrapper[4768]: I0217 14:13:39.762329 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" event={"ID":"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8","Type":"ContainerDied","Data":"fb1b5fd978f27f8b955485db2b38435e646b6141ee484caa95756fa597741f79"} Feb 17 14:13:39 crc kubenswrapper[4768]: I0217 14:13:39.763149 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-89898" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="registry-server" containerID="cri-o://63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e" gracePeriod=2 Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.275733 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.369256 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-utilities\") pod \"b3996e64-3ade-462a-8a48-e8c2b4fab078\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.369299 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bksdz\" (UniqueName: \"kubernetes.io/projected/b3996e64-3ade-462a-8a48-e8c2b4fab078-kube-api-access-bksdz\") pod \"b3996e64-3ade-462a-8a48-e8c2b4fab078\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.369357 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-catalog-content\") pod \"b3996e64-3ade-462a-8a48-e8c2b4fab078\" (UID: \"b3996e64-3ade-462a-8a48-e8c2b4fab078\") " Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.370481 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-utilities" (OuterVolumeSpecName: "utilities") pod "b3996e64-3ade-462a-8a48-e8c2b4fab078" (UID: "b3996e64-3ade-462a-8a48-e8c2b4fab078"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.375190 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3996e64-3ade-462a-8a48-e8c2b4fab078-kube-api-access-bksdz" (OuterVolumeSpecName: "kube-api-access-bksdz") pod "b3996e64-3ade-462a-8a48-e8c2b4fab078" (UID: "b3996e64-3ade-462a-8a48-e8c2b4fab078"). InnerVolumeSpecName "kube-api-access-bksdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.472060 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.472118 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bksdz\" (UniqueName: \"kubernetes.io/projected/b3996e64-3ade-462a-8a48-e8c2b4fab078-kube-api-access-bksdz\") on node \"crc\" DevicePath \"\"" Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.785948 4768 generic.go:334] "Generic (PLEG): container finished" podID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerID="63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e" exitCode=0 Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.786224 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89898" Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.786790 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89898" event={"ID":"b3996e64-3ade-462a-8a48-e8c2b4fab078","Type":"ContainerDied","Data":"63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e"} Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.786843 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89898" event={"ID":"b3996e64-3ade-462a-8a48-e8c2b4fab078","Type":"ContainerDied","Data":"b91a82fd36637df8815a5e9fe491f7393921dfa527e908f1935dc0f0fcbc16b4"} Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.786871 4768 scope.go:117] "RemoveContainer" containerID="63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e" Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.849168 4768 scope.go:117] "RemoveContainer" 
containerID="79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd"
Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.877141 4768 scope.go:117] "RemoveContainer" containerID="0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6"
Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.929201 4768 scope.go:117] "RemoveContainer" containerID="63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e"
Feb 17 14:13:40 crc kubenswrapper[4768]: E0217 14:13:40.929687 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e\": container with ID starting with 63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e not found: ID does not exist" containerID="63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e"
Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.929731 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e"} err="failed to get container status \"63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e\": rpc error: code = NotFound desc = could not find container \"63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e\": container with ID starting with 63f803f6996c33e2773234c5dc60dcb694e1d1f1dddd9c2d7700734ee0a49b7e not found: ID does not exist"
Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.929771 4768 scope.go:117] "RemoveContainer" containerID="79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd"
Feb 17 14:13:40 crc kubenswrapper[4768]: E0217 14:13:40.930051 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd\": container with ID starting with 79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd not found: ID does not exist" containerID="79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd"
Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.930071 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd"} err="failed to get container status \"79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd\": rpc error: code = NotFound desc = could not find container \"79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd\": container with ID starting with 79fb4ce27d03353fa5a04b0b3eafcdbe27abad9ac92c8d5ac59db0d56074c4bd not found: ID does not exist"
Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.930126 4768 scope.go:117] "RemoveContainer" containerID="0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6"
Feb 17 14:13:40 crc kubenswrapper[4768]: E0217 14:13:40.930426 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6\": container with ID starting with 0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6 not found: ID does not exist" containerID="0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6"
Feb 17 14:13:40 crc kubenswrapper[4768]: I0217 14:13:40.930454 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6"} err="failed to get container status \"0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6\": rpc error: code = NotFound desc = could not find container \"0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6\": container with ID starting with 0d4d51fb1b5558cc248d89a4e80274632c9c77ad8ec2fb0cd55d7401e57d8ad6 not found: ID does not exist"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.029612 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3996e64-3ade-462a-8a48-e8c2b4fab078" (UID: "b3996e64-3ade-462a-8a48-e8c2b4fab078"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.087147 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3996e64-3ade-462a-8a48-e8c2b4fab078-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.137494 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89898"]
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.150188 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-89898"]
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.259993 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.392023 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-secret-0\") pod \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") "
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.392317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-inventory\") pod \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") "
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.392539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-combined-ca-bundle\") pod \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") "
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.392579 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hhvj\" (UniqueName: \"kubernetes.io/projected/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-kube-api-access-8hhvj\") pod \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") "
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.392626 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-ssh-key-openstack-edpm-ipam\") pod \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\" (UID: \"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8\") "
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.406396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" (UID: "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.406606 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-kube-api-access-8hhvj" (OuterVolumeSpecName: "kube-api-access-8hhvj") pod "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" (UID: "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8"). InnerVolumeSpecName "kube-api-access-8hhvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.423837 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" (UID: "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.430881 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-inventory" (OuterVolumeSpecName: "inventory") pod "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" (UID: "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.440287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" (UID: "30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.496092 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.496285 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.496301 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.496313 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.496325 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hhvj\" (UniqueName: \"kubernetes.io/projected/30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8-kube-api-access-8hhvj\") on node \"crc\" DevicePath \"\""
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.550577 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" path="/var/lib/kubelet/pods/b3996e64-3ade-462a-8a48-e8c2b4fab078/volumes"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.799934 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.799939 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp" event={"ID":"30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8","Type":"ContainerDied","Data":"0c80b54ebb6977b8fe644b57bfbdc155888936892344dab7b2fd257944e6a8b3"}
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.800015 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c80b54ebb6977b8fe644b57bfbdc155888936892344dab7b2fd257944e6a8b3"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.886926 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"]
Feb 17 14:13:41 crc kubenswrapper[4768]: E0217 14:13:41.887877 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="extract-utilities"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.888018 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="extract-utilities"
Feb 17 14:13:41 crc kubenswrapper[4768]: E0217 14:13:41.888225 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="extract-content"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.888379 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="extract-content"
Feb 17 14:13:41 crc kubenswrapper[4768]: E0217 14:13:41.888497 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="registry-server"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.888618 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="registry-server"
Feb 17 14:13:41 crc kubenswrapper[4768]: E0217 14:13:41.888749 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.888858 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.889382 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3996e64-3ade-462a-8a48-e8c2b4fab078" containerName="registry-server"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.889551 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.890660 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.894174 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.894502 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.894750 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.894942 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.895089 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.895203 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.897696 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 17 14:13:41 crc kubenswrapper[4768]: I0217 14:13:41.900458 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"]
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011698 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011760 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011882 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011907 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011931 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.011964 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.012061 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjzhz\" (UniqueName: \"kubernetes.io/projected/7df23c60-d5f8-47e9-a852-ba39850823cb-kube-api-access-jjzhz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.012181 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.012315 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114170 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114205 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114232 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114255 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114271 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114289 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114315 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114330 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjzhz\" (UniqueName: \"kubernetes.io/projected/7df23c60-d5f8-47e9-a852-ba39850823cb-kube-api-access-jjzhz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114469 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.114497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.115264 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.118640 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.118738 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.119163 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.121726 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.122413 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.124815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.126710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.126849 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.128846 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.136689 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjzhz\" (UniqueName: \"kubernetes.io/projected/7df23c60-d5f8-47e9-a852-ba39850823cb-kube-api-access-jjzhz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4tgnr\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.220402 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.666732 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr"]
Feb 17 14:13:42 crc kubenswrapper[4768]: I0217 14:13:42.808608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr" event={"ID":"7df23c60-d5f8-47e9-a852-ba39850823cb","Type":"ContainerStarted","Data":"2396a50957e366f008b304a69edb3ac5141223689044a796d14c3039e7e5c11f"}
Feb 17 14:13:43 crc kubenswrapper[4768]: I0217 14:13:43.818735 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr" event={"ID":"7df23c60-d5f8-47e9-a852-ba39850823cb","Type":"ContainerStarted","Data":"ff5d2e7d42a9738b6c0a7b4fb551ab2e00a0001dc7a99bc47fb442072063b2de"}
Feb 17 14:13:43 crc kubenswrapper[4768]: I0217 14:13:43.843369 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr" podStartSLOduration=2.426999206 podStartE2EDuration="2.843344897s" podCreationTimestamp="2026-02-17 14:13:41 +0000 UTC" firstStartedPulling="2026-02-17 14:13:42.671193822 +0000 UTC m=+2241.950580274" lastFinishedPulling="2026-02-17 14:13:43.087539523 +0000 UTC m=+2242.366925965" observedRunningTime="2026-02-17 14:13:43.834226117 +0000 UTC m=+2243.113612569" watchObservedRunningTime="2026-02-17 14:13:43.843344897 +0000 UTC m=+2243.122731339"
Feb 17 14:13:47 crc kubenswrapper[4768]: I0217 14:13:47.535214 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"
Feb 17 14:13:47 crc kubenswrapper[4768]: E0217 14:13:47.535948 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:13:58 crc kubenswrapper[4768]: I0217 14:13:58.534985 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"
Feb 17 14:13:58 crc kubenswrapper[4768]: E0217 14:13:58.536072 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:14:13 crc kubenswrapper[4768]: I0217 14:14:13.534380 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"
Feb 17 14:14:13 crc kubenswrapper[4768]: E0217 14:14:13.535309 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:14:24 crc kubenswrapper[4768]: I0217 14:14:24.534247 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"
Feb 17 14:14:24 crc kubenswrapper[4768]: E0217 14:14:24.535167 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:14:38 crc kubenswrapper[4768]: I0217 14:14:38.534520 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"
Feb 17 14:14:38 crc kubenswrapper[4768]: E0217 14:14:38.535362 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:14:52 crc kubenswrapper[4768]: I0217 14:14:52.534277 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016"
Feb 17 14:14:52 crc kubenswrapper[4768]: E0217 14:14:52.534989 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.155142 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"]
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.156930 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.159328 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.159340 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.164603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86tvl\" (UniqueName: \"kubernetes.io/projected/84645fb8-5ec8-447f-85dd-045f06004115-kube-api-access-86tvl\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.164670 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84645fb8-5ec8-447f-85dd-045f06004115-config-volume\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.164771 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84645fb8-5ec8-447f-85dd-045f06004115-secret-volume\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.176792 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"]
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.269501 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86tvl\" (UniqueName: \"kubernetes.io/projected/84645fb8-5ec8-447f-85dd-045f06004115-kube-api-access-86tvl\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.269567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84645fb8-5ec8-447f-85dd-045f06004115-config-volume\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.269818 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84645fb8-5ec8-447f-85dd-045f06004115-secret-volume\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"
Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.270922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84645fb8-5ec8-447f-85dd-045f06004115-config-volume\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") "
pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.275670 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84645fb8-5ec8-447f-85dd-045f06004115-secret-volume\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.287203 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86tvl\" (UniqueName: \"kubernetes.io/projected/84645fb8-5ec8-447f-85dd-045f06004115-kube-api-access-86tvl\") pod \"collect-profiles-29522295-tvq6k\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.475687 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" Feb 17 14:15:00 crc kubenswrapper[4768]: I0217 14:15:00.945864 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k"] Feb 17 14:15:01 crc kubenswrapper[4768]: I0217 14:15:01.503448 4768 generic.go:334] "Generic (PLEG): container finished" podID="84645fb8-5ec8-447f-85dd-045f06004115" containerID="94ba44ffad20fe8ccbf8e6fe37f195c00cabcbaeade7a23d05b92515853bd8ee" exitCode=0 Feb 17 14:15:01 crc kubenswrapper[4768]: I0217 14:15:01.503498 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" event={"ID":"84645fb8-5ec8-447f-85dd-045f06004115","Type":"ContainerDied","Data":"94ba44ffad20fe8ccbf8e6fe37f195c00cabcbaeade7a23d05b92515853bd8ee"} Feb 17 14:15:01 crc kubenswrapper[4768]: I0217 14:15:01.503745 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" event={"ID":"84645fb8-5ec8-447f-85dd-045f06004115","Type":"ContainerStarted","Data":"1bebd517aa9696c6660064de902f573d58679cfc56a3eff67e76608615c6b285"} Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.810895 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.917182 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84645fb8-5ec8-447f-85dd-045f06004115-config-volume\") pod \"84645fb8-5ec8-447f-85dd-045f06004115\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.917263 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86tvl\" (UniqueName: \"kubernetes.io/projected/84645fb8-5ec8-447f-85dd-045f06004115-kube-api-access-86tvl\") pod \"84645fb8-5ec8-447f-85dd-045f06004115\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.917467 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84645fb8-5ec8-447f-85dd-045f06004115-secret-volume\") pod \"84645fb8-5ec8-447f-85dd-045f06004115\" (UID: \"84645fb8-5ec8-447f-85dd-045f06004115\") " Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.918151 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84645fb8-5ec8-447f-85dd-045f06004115-config-volume" (OuterVolumeSpecName: "config-volume") pod "84645fb8-5ec8-447f-85dd-045f06004115" (UID: "84645fb8-5ec8-447f-85dd-045f06004115"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.919302 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84645fb8-5ec8-447f-85dd-045f06004115-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.923585 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84645fb8-5ec8-447f-85dd-045f06004115-kube-api-access-86tvl" (OuterVolumeSpecName: "kube-api-access-86tvl") pod "84645fb8-5ec8-447f-85dd-045f06004115" (UID: "84645fb8-5ec8-447f-85dd-045f06004115"). InnerVolumeSpecName "kube-api-access-86tvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:15:02 crc kubenswrapper[4768]: I0217 14:15:02.924952 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84645fb8-5ec8-447f-85dd-045f06004115-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84645fb8-5ec8-447f-85dd-045f06004115" (UID: "84645fb8-5ec8-447f-85dd-045f06004115"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:03 crc kubenswrapper[4768]: I0217 14:15:03.020888 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86tvl\" (UniqueName: \"kubernetes.io/projected/84645fb8-5ec8-447f-85dd-045f06004115-kube-api-access-86tvl\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:03 crc kubenswrapper[4768]: I0217 14:15:03.021192 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84645fb8-5ec8-447f-85dd-045f06004115-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:03 crc kubenswrapper[4768]: I0217 14:15:03.520223 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" event={"ID":"84645fb8-5ec8-447f-85dd-045f06004115","Type":"ContainerDied","Data":"1bebd517aa9696c6660064de902f573d58679cfc56a3eff67e76608615c6b285"} Feb 17 14:15:03 crc kubenswrapper[4768]: I0217 14:15:03.520261 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bebd517aa9696c6660064de902f573d58679cfc56a3eff67e76608615c6b285" Feb 17 14:15:03 crc kubenswrapper[4768]: I0217 14:15:03.520265 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522295-tvq6k" Feb 17 14:15:04 crc kubenswrapper[4768]: I0217 14:15:04.416031 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz"] Feb 17 14:15:04 crc kubenswrapper[4768]: I0217 14:15:04.429763 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522250-hq8zz"] Feb 17 14:15:05 crc kubenswrapper[4768]: I0217 14:15:05.555593 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0face492-83c1-49d4-bc1e-7de407151988" path="/var/lib/kubelet/pods/0face492-83c1-49d4-bc1e-7de407151988/volumes" Feb 17 14:15:06 crc kubenswrapper[4768]: I0217 14:15:06.535545 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:15:06 crc kubenswrapper[4768]: E0217 14:15:06.536253 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:15:17 crc kubenswrapper[4768]: I0217 14:15:17.535231 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:15:17 crc kubenswrapper[4768]: E0217 14:15:17.536339 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:15:29 crc kubenswrapper[4768]: I0217 14:15:29.535318 4768 scope.go:117] "RemoveContainer" containerID="493e305c0fb6a166430cf00522a68a94918c96e88bc59e9b421c8bf6ccc2800b" Feb 17 14:15:32 crc kubenswrapper[4768]: I0217 14:15:32.534868 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:15:32 crc kubenswrapper[4768]: E0217 14:15:32.535870 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:15:46 crc kubenswrapper[4768]: I0217 14:15:46.536556 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:15:46 crc kubenswrapper[4768]: E0217 14:15:46.537545 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:15:54 crc kubenswrapper[4768]: I0217 14:15:54.978501 4768 generic.go:334] "Generic (PLEG): container finished" podID="7df23c60-d5f8-47e9-a852-ba39850823cb" containerID="ff5d2e7d42a9738b6c0a7b4fb551ab2e00a0001dc7a99bc47fb442072063b2de" exitCode=0 Feb 17 14:15:54 crc kubenswrapper[4768]: I0217 14:15:54.978586 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr" event={"ID":"7df23c60-d5f8-47e9-a852-ba39850823cb","Type":"ContainerDied","Data":"ff5d2e7d42a9738b6c0a7b4fb551ab2e00a0001dc7a99bc47fb442072063b2de"} Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.422342 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501244 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-extra-config-0\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501404 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-1\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501513 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-2\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501546 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjzhz\" (UniqueName: \"kubernetes.io/projected/7df23c60-d5f8-47e9-a852-ba39850823cb-kube-api-access-jjzhz\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501682 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-ssh-key-openstack-edpm-ipam\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501706 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-0\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-3\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501798 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-0\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501866 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-1\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501899 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-combined-ca-bundle\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.501959 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-inventory\") pod \"7df23c60-d5f8-47e9-a852-ba39850823cb\" (UID: \"7df23c60-d5f8-47e9-a852-ba39850823cb\") " Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.509621 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.509883 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df23c60-d5f8-47e9-a852-ba39850823cb-kube-api-access-jjzhz" (OuterVolumeSpecName: "kube-api-access-jjzhz") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "kube-api-access-jjzhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.534485 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.544194 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.545051 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.546957 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.550505 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.552737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.555913 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.564157 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-inventory" (OuterVolumeSpecName: "inventory") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.570823 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7df23c60-d5f8-47e9-a852-ba39850823cb" (UID: "7df23c60-d5f8-47e9-a852-ba39850823cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605192 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605241 4768 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605253 4768 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605267 4768 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605280 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605292 4768 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605300 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-1\") on node 
\"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605309 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605318 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjzhz\" (UniqueName: \"kubernetes.io/projected/7df23c60-d5f8-47e9-a852-ba39850823cb-kube-api-access-jjzhz\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605327 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:56 crc kubenswrapper[4768]: I0217 14:15:56.605336 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7df23c60-d5f8-47e9-a852-ba39850823cb-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.001730 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr" event={"ID":"7df23c60-d5f8-47e9-a852-ba39850823cb","Type":"ContainerDied","Data":"2396a50957e366f008b304a69edb3ac5141223689044a796d14c3039e7e5c11f"} Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.002250 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2396a50957e366f008b304a69edb3ac5141223689044a796d14c3039e7e5c11f" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.001786 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4tgnr" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.102831 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp"] Feb 17 14:15:57 crc kubenswrapper[4768]: E0217 14:15:57.103191 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84645fb8-5ec8-447f-85dd-045f06004115" containerName="collect-profiles" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.103209 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="84645fb8-5ec8-447f-85dd-045f06004115" containerName="collect-profiles" Feb 17 14:15:57 crc kubenswrapper[4768]: E0217 14:15:57.103240 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df23c60-d5f8-47e9-a852-ba39850823cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.103248 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df23c60-d5f8-47e9-a852-ba39850823cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.103416 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84645fb8-5ec8-447f-85dd-045f06004115" containerName="collect-profiles" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.103440 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df23c60-d5f8-47e9-a852-ba39850823cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.103999 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.106273 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.106460 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.106585 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.106705 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.106838 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr87q" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.126055 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp"] Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.212583 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.212630 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pchkc\" (UniqueName: \"kubernetes.io/projected/037854ba-d107-4be1-8a90-914e9180957d-kube-api-access-pchkc\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.212670 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.212772 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.212958 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.212985 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.213154 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.314051 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.314181 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.314202 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pchkc\" (UniqueName: \"kubernetes.io/projected/037854ba-d107-4be1-8a90-914e9180957d-kube-api-access-pchkc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 
14:15:57.314238 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.314258 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.314305 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.314330 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.318377 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-inventory\") 
pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.318620 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.318933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.319053 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.319187 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" 
Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.320088 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.337279 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pchkc\" (UniqueName: \"kubernetes.io/projected/037854ba-d107-4be1-8a90-914e9180957d-kube-api-access-pchkc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-txckp\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.437431 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.985926 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp"] Feb 17 14:15:57 crc kubenswrapper[4768]: W0217 14:15:57.988579 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod037854ba_d107_4be1_8a90_914e9180957d.slice/crio-04025dfee28da6c4d196bfd5e7bc160117db8b594fa9a3d69f1eeb439b0dfb13 WatchSource:0}: Error finding container 04025dfee28da6c4d196bfd5e7bc160117db8b594fa9a3d69f1eeb439b0dfb13: Status 404 returned error can't find the container with id 04025dfee28da6c4d196bfd5e7bc160117db8b594fa9a3d69f1eeb439b0dfb13 Feb 17 14:15:57 crc kubenswrapper[4768]: I0217 14:15:57.990930 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:15:58 crc kubenswrapper[4768]: I0217 14:15:58.018432 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" event={"ID":"037854ba-d107-4be1-8a90-914e9180957d","Type":"ContainerStarted","Data":"04025dfee28da6c4d196bfd5e7bc160117db8b594fa9a3d69f1eeb439b0dfb13"} Feb 17 14:15:58 crc kubenswrapper[4768]: I0217 14:15:58.534679 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:15:58 crc kubenswrapper[4768]: E0217 14:15:58.534987 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 
17 14:15:59 crc kubenswrapper[4768]: I0217 14:15:59.029547 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" event={"ID":"037854ba-d107-4be1-8a90-914e9180957d","Type":"ContainerStarted","Data":"73e5668d9623f1d78aa4467a052e528ced85af4221bf9bc495c369ec19220f47"} Feb 17 14:15:59 crc kubenswrapper[4768]: I0217 14:15:59.058490 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" podStartSLOduration=1.602787711 podStartE2EDuration="2.058469184s" podCreationTimestamp="2026-02-17 14:15:57 +0000 UTC" firstStartedPulling="2026-02-17 14:15:57.990726134 +0000 UTC m=+2377.270112566" lastFinishedPulling="2026-02-17 14:15:58.446407587 +0000 UTC m=+2377.725794039" observedRunningTime="2026-02-17 14:15:59.055971245 +0000 UTC m=+2378.335357717" watchObservedRunningTime="2026-02-17 14:15:59.058469184 +0000 UTC m=+2378.337855626" Feb 17 14:16:13 crc kubenswrapper[4768]: I0217 14:16:13.534995 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:16:13 crc kubenswrapper[4768]: E0217 14:16:13.535902 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:16:26 crc kubenswrapper[4768]: I0217 14:16:26.535537 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:16:26 crc kubenswrapper[4768]: E0217 14:16:26.536731 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:16:38 crc kubenswrapper[4768]: I0217 14:16:38.534950 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:16:38 crc kubenswrapper[4768]: E0217 14:16:38.535721 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:16:49 crc kubenswrapper[4768]: I0217 14:16:49.534940 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:16:49 crc kubenswrapper[4768]: E0217 14:16:49.536087 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:17:04 crc kubenswrapper[4768]: I0217 14:17:04.535057 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:17:04 crc kubenswrapper[4768]: E0217 14:17:04.535808 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:17:18 crc kubenswrapper[4768]: I0217 14:17:18.535442 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:17:18 crc kubenswrapper[4768]: E0217 14:17:18.536730 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:17:31 crc kubenswrapper[4768]: I0217 14:17:31.542971 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:17:31 crc kubenswrapper[4768]: E0217 14:17:31.543813 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:17:44 crc kubenswrapper[4768]: I0217 14:17:44.534528 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:17:44 crc kubenswrapper[4768]: E0217 14:17:44.535339 4768 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:17:58 crc kubenswrapper[4768]: I0217 14:17:58.534766 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:17:59 crc kubenswrapper[4768]: I0217 14:17:59.144724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"5e19e1a31aa1dc6bcd956bc608001955d760aa97c517ab661175716241ae79b7"} Feb 17 14:18:15 crc kubenswrapper[4768]: I0217 14:18:15.295273 4768 generic.go:334] "Generic (PLEG): container finished" podID="037854ba-d107-4be1-8a90-914e9180957d" containerID="73e5668d9623f1d78aa4467a052e528ced85af4221bf9bc495c369ec19220f47" exitCode=0 Feb 17 14:18:15 crc kubenswrapper[4768]: I0217 14:18:15.295424 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" event={"ID":"037854ba-d107-4be1-8a90-914e9180957d","Type":"ContainerDied","Data":"73e5668d9623f1d78aa4467a052e528ced85af4221bf9bc495c369ec19220f47"} Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.764524 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.832031 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pchkc\" (UniqueName: \"kubernetes.io/projected/037854ba-d107-4be1-8a90-914e9180957d-kube-api-access-pchkc\") pod \"037854ba-d107-4be1-8a90-914e9180957d\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.832148 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-0\") pod \"037854ba-d107-4be1-8a90-914e9180957d\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.832229 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-inventory\") pod \"037854ba-d107-4be1-8a90-914e9180957d\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.832299 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ssh-key-openstack-edpm-ipam\") pod \"037854ba-d107-4be1-8a90-914e9180957d\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.832360 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-2\") pod \"037854ba-d107-4be1-8a90-914e9180957d\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 
14:18:16.832416 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-telemetry-combined-ca-bundle\") pod \"037854ba-d107-4be1-8a90-914e9180957d\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.832440 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-1\") pod \"037854ba-d107-4be1-8a90-914e9180957d\" (UID: \"037854ba-d107-4be1-8a90-914e9180957d\") " Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.837840 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "037854ba-d107-4be1-8a90-914e9180957d" (UID: "037854ba-d107-4be1-8a90-914e9180957d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.840648 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037854ba-d107-4be1-8a90-914e9180957d-kube-api-access-pchkc" (OuterVolumeSpecName: "kube-api-access-pchkc") pod "037854ba-d107-4be1-8a90-914e9180957d" (UID: "037854ba-d107-4be1-8a90-914e9180957d"). InnerVolumeSpecName "kube-api-access-pchkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.865010 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "037854ba-d107-4be1-8a90-914e9180957d" (UID: "037854ba-d107-4be1-8a90-914e9180957d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.865373 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "037854ba-d107-4be1-8a90-914e9180957d" (UID: "037854ba-d107-4be1-8a90-914e9180957d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.871218 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "037854ba-d107-4be1-8a90-914e9180957d" (UID: "037854ba-d107-4be1-8a90-914e9180957d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.876351 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-inventory" (OuterVolumeSpecName: "inventory") pod "037854ba-d107-4be1-8a90-914e9180957d" (UID: "037854ba-d107-4be1-8a90-914e9180957d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.887059 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "037854ba-d107-4be1-8a90-914e9180957d" (UID: "037854ba-d107-4be1-8a90-914e9180957d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.934752 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.934793 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.934807 4768 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.934816 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.934825 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pchkc\" (UniqueName: \"kubernetes.io/projected/037854ba-d107-4be1-8a90-914e9180957d-kube-api-access-pchkc\") on node \"crc\" 
DevicePath \"\"" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.934835 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 14:18:16 crc kubenswrapper[4768]: I0217 14:18:16.934867 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/037854ba-d107-4be1-8a90-914e9180957d-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 14:18:17 crc kubenswrapper[4768]: I0217 14:18:17.320326 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" event={"ID":"037854ba-d107-4be1-8a90-914e9180957d","Type":"ContainerDied","Data":"04025dfee28da6c4d196bfd5e7bc160117db8b594fa9a3d69f1eeb439b0dfb13"} Feb 17 14:18:17 crc kubenswrapper[4768]: I0217 14:18:17.320621 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04025dfee28da6c4d196bfd5e7bc160117db8b594fa9a3d69f1eeb439b0dfb13" Feb 17 14:18:17 crc kubenswrapper[4768]: I0217 14:18:17.320420 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-txckp" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.002703 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 14:19:09 crc kubenswrapper[4768]: E0217 14:19:09.003622 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037854ba-d107-4be1-8a90-914e9180957d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.003637 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="037854ba-d107-4be1-8a90-914e9180957d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.003836 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="037854ba-d107-4be1-8a90-914e9180957d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.004487 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.006799 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.007928 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-clc7g" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.010279 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.011841 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.024961 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.039475 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-config-data\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.039700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.039755 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141158 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141203 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141224 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141242 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141280 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141305 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-config-data\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141547 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141649 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.141984 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xndzs\" (UniqueName: \"kubernetes.io/projected/780f2ee6-f4d9-455c-97e6-7e6451706324-kube-api-access-xndzs\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.142779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.142837 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-config-data\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.160998 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.244294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.244345 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.244376 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.244457 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.244478 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.244530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xndzs\" (UniqueName: \"kubernetes.io/projected/780f2ee6-f4d9-455c-97e6-7e6451706324-kube-api-access-xndzs\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.245010 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.245116 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " 
pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.245175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.249456 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.256353 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.262963 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xndzs\" (UniqueName: \"kubernetes.io/projected/780f2ee6-f4d9-455c-97e6-7e6451706324-kube-api-access-xndzs\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.305066 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.326513 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.804882 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 14:19:09 crc kubenswrapper[4768]: I0217 14:19:09.856075 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"780f2ee6-f4d9-455c-97e6-7e6451706324","Type":"ContainerStarted","Data":"75cfef6f7ff5a41f3d5dd8621ce0981f46aed73da6a8c4152b01b74868f99792"} Feb 17 14:19:45 crc kubenswrapper[4768]: E0217 14:19:45.499260 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 17 14:19:45 crc kubenswrapper[4768]: E0217 14:19:45.500184 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xndzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(780f2ee6-f4d9-455c-97e6-7e6451706324): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 14:19:45 crc kubenswrapper[4768]: E0217 14:19:45.501592 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="780f2ee6-f4d9-455c-97e6-7e6451706324" Feb 17 14:19:46 crc kubenswrapper[4768]: E0217 14:19:46.238738 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="780f2ee6-f4d9-455c-97e6-7e6451706324" Feb 17 14:19:58 crc kubenswrapper[4768]: I0217 14:19:58.061008 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:19:58 crc kubenswrapper[4768]: I0217 14:19:58.061982 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:19:59 crc kubenswrapper[4768]: I0217 14:19:59.282657 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 17 14:20:00 crc kubenswrapper[4768]: I0217 14:20:00.380754 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/tempest-tests-tempest" event={"ID":"780f2ee6-f4d9-455c-97e6-7e6451706324","Type":"ContainerStarted","Data":"75a350e1d00cd063f9c25d6cd1f8e553497147a0a776b96659f66a101e7b5969"} Feb 17 14:20:00 crc kubenswrapper[4768]: I0217 14:20:00.401597 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.933427358 podStartE2EDuration="53.401577638s" podCreationTimestamp="2026-02-17 14:19:07 +0000 UTC" firstStartedPulling="2026-02-17 14:19:09.811204269 +0000 UTC m=+2569.090590711" lastFinishedPulling="2026-02-17 14:19:59.279354549 +0000 UTC m=+2618.558740991" observedRunningTime="2026-02-17 14:20:00.400863958 +0000 UTC m=+2619.680250400" watchObservedRunningTime="2026-02-17 14:20:00.401577638 +0000 UTC m=+2619.680964080" Feb 17 14:20:28 crc kubenswrapper[4768]: I0217 14:20:28.060096 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:20:28 crc kubenswrapper[4768]: I0217 14:20:28.060694 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.060562 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.061145 4768 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.061204 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.061983 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e19e1a31aa1dc6bcd956bc608001955d760aa97c517ab661175716241ae79b7"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.062041 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://5e19e1a31aa1dc6bcd956bc608001955d760aa97c517ab661175716241ae79b7" gracePeriod=600 Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.990849 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="5e19e1a31aa1dc6bcd956bc608001955d760aa97c517ab661175716241ae79b7" exitCode=0 Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.990928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"5e19e1a31aa1dc6bcd956bc608001955d760aa97c517ab661175716241ae79b7"} Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 
14:20:58.991442 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965"} Feb 17 14:20:58 crc kubenswrapper[4768]: I0217 14:20:58.991468 4768 scope.go:117] "RemoveContainer" containerID="f1e36808cf48ba534d66b2ba15d35f7785c1fe424a7230a2ddcd4dc0fb81f016" Feb 17 14:21:01 crc kubenswrapper[4768]: I0217 14:21:01.957749 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5krn"] Feb 17 14:21:01 crc kubenswrapper[4768]: I0217 14:21:01.961346 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:01 crc kubenswrapper[4768]: I0217 14:21:01.981696 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5krn"] Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.102849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-utilities\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.102944 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cwmq\" (UniqueName: \"kubernetes.io/projected/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-kube-api-access-4cwmq\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.104346 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-catalog-content\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.205795 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-catalog-content\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.205970 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-utilities\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.206037 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cwmq\" (UniqueName: \"kubernetes.io/projected/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-kube-api-access-4cwmq\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.206299 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-catalog-content\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.206646 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-utilities\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.233074 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cwmq\" (UniqueName: \"kubernetes.io/projected/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-kube-api-access-4cwmq\") pod \"redhat-operators-z5krn\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.297264 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:02 crc kubenswrapper[4768]: I0217 14:21:02.748866 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5krn"] Feb 17 14:21:03 crc kubenswrapper[4768]: I0217 14:21:03.054243 4768 generic.go:334] "Generic (PLEG): container finished" podID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerID="0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181" exitCode=0 Feb 17 14:21:03 crc kubenswrapper[4768]: I0217 14:21:03.054428 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5krn" event={"ID":"e787272f-f1cb-4dcd-b2ed-8fe0661c6070","Type":"ContainerDied","Data":"0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181"} Feb 17 14:21:03 crc kubenswrapper[4768]: I0217 14:21:03.054577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5krn" event={"ID":"e787272f-f1cb-4dcd-b2ed-8fe0661c6070","Type":"ContainerStarted","Data":"5d70a3c72a5cbdbd0a38800a9fa1f1915326eb0f86dffe9b5836a1552714ceff"} Feb 17 14:21:03 crc kubenswrapper[4768]: I0217 14:21:03.060904 4768 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 17 14:21:04 crc kubenswrapper[4768]: I0217 14:21:04.065766 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5krn" event={"ID":"e787272f-f1cb-4dcd-b2ed-8fe0661c6070","Type":"ContainerStarted","Data":"1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324"} Feb 17 14:21:06 crc kubenswrapper[4768]: I0217 14:21:06.087719 4768 generic.go:334] "Generic (PLEG): container finished" podID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerID="1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324" exitCode=0 Feb 17 14:21:06 crc kubenswrapper[4768]: I0217 14:21:06.087801 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5krn" event={"ID":"e787272f-f1cb-4dcd-b2ed-8fe0661c6070","Type":"ContainerDied","Data":"1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324"} Feb 17 14:21:07 crc kubenswrapper[4768]: I0217 14:21:07.097546 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5krn" event={"ID":"e787272f-f1cb-4dcd-b2ed-8fe0661c6070","Type":"ContainerStarted","Data":"9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3"} Feb 17 14:21:07 crc kubenswrapper[4768]: I0217 14:21:07.123363 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5krn" podStartSLOduration=2.675268421 podStartE2EDuration="6.12334349s" podCreationTimestamp="2026-02-17 14:21:01 +0000 UTC" firstStartedPulling="2026-02-17 14:21:03.060577871 +0000 UTC m=+2682.339964313" lastFinishedPulling="2026-02-17 14:21:06.5086529 +0000 UTC m=+2685.788039382" observedRunningTime="2026-02-17 14:21:07.116345568 +0000 UTC m=+2686.395732030" watchObservedRunningTime="2026-02-17 14:21:07.12334349 +0000 UTC m=+2686.402729932" Feb 17 14:21:12 crc kubenswrapper[4768]: I0217 14:21:12.298255 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:12 crc kubenswrapper[4768]: I0217 14:21:12.298872 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:13 crc kubenswrapper[4768]: I0217 14:21:13.348007 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z5krn" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="registry-server" probeResult="failure" output=< Feb 17 14:21:13 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 14:21:13 crc kubenswrapper[4768]: > Feb 17 14:21:22 crc kubenswrapper[4768]: I0217 14:21:22.372488 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:22 crc kubenswrapper[4768]: I0217 14:21:22.433834 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:22 crc kubenswrapper[4768]: I0217 14:21:22.613641 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5krn"] Feb 17 14:21:24 crc kubenswrapper[4768]: I0217 14:21:24.288234 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5krn" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="registry-server" containerID="cri-o://9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3" gracePeriod=2 Feb 17 14:21:24 crc kubenswrapper[4768]: I0217 14:21:24.800185 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:24 crc kubenswrapper[4768]: I0217 14:21:24.967819 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cwmq\" (UniqueName: \"kubernetes.io/projected/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-kube-api-access-4cwmq\") pod \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " Feb 17 14:21:24 crc kubenswrapper[4768]: I0217 14:21:24.968003 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-utilities\") pod \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " Feb 17 14:21:24 crc kubenswrapper[4768]: I0217 14:21:24.968028 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-catalog-content\") pod \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\" (UID: \"e787272f-f1cb-4dcd-b2ed-8fe0661c6070\") " Feb 17 14:21:24 crc kubenswrapper[4768]: I0217 14:21:24.968849 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-utilities" (OuterVolumeSpecName: "utilities") pod "e787272f-f1cb-4dcd-b2ed-8fe0661c6070" (UID: "e787272f-f1cb-4dcd-b2ed-8fe0661c6070"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:21:24 crc kubenswrapper[4768]: I0217 14:21:24.980365 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-kube-api-access-4cwmq" (OuterVolumeSpecName: "kube-api-access-4cwmq") pod "e787272f-f1cb-4dcd-b2ed-8fe0661c6070" (UID: "e787272f-f1cb-4dcd-b2ed-8fe0661c6070"). InnerVolumeSpecName "kube-api-access-4cwmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.070334 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cwmq\" (UniqueName: \"kubernetes.io/projected/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-kube-api-access-4cwmq\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.070370 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.105966 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e787272f-f1cb-4dcd-b2ed-8fe0661c6070" (UID: "e787272f-f1cb-4dcd-b2ed-8fe0661c6070"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.172022 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e787272f-f1cb-4dcd-b2ed-8fe0661c6070-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.306195 4768 generic.go:334] "Generic (PLEG): container finished" podID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerID="9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3" exitCode=0 Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.306236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5krn" event={"ID":"e787272f-f1cb-4dcd-b2ed-8fe0661c6070","Type":"ContainerDied","Data":"9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3"} Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.306262 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-z5krn" event={"ID":"e787272f-f1cb-4dcd-b2ed-8fe0661c6070","Type":"ContainerDied","Data":"5d70a3c72a5cbdbd0a38800a9fa1f1915326eb0f86dffe9b5836a1552714ceff"} Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.306261 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5krn" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.306294 4768 scope.go:117] "RemoveContainer" containerID="9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.336678 4768 scope.go:117] "RemoveContainer" containerID="1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.336730 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5krn"] Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.345567 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5krn"] Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.356697 4768 scope.go:117] "RemoveContainer" containerID="0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.404288 4768 scope.go:117] "RemoveContainer" containerID="9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3" Feb 17 14:21:25 crc kubenswrapper[4768]: E0217 14:21:25.404993 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3\": container with ID starting with 9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3 not found: ID does not exist" containerID="9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.405160 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3"} err="failed to get container status \"9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3\": rpc error: code = NotFound desc = could not find container \"9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3\": container with ID starting with 9df1eab9f83ea769fc1d6d2a91fb0af3357689e47fb35842a59767bbd37434d3 not found: ID does not exist" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.405306 4768 scope.go:117] "RemoveContainer" containerID="1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324" Feb 17 14:21:25 crc kubenswrapper[4768]: E0217 14:21:25.406033 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324\": container with ID starting with 1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324 not found: ID does not exist" containerID="1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.406158 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324"} err="failed to get container status \"1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324\": rpc error: code = NotFound desc = could not find container \"1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324\": container with ID starting with 1d7047f5ccac8becdc343e24555ed3c2db77b55ec5fc0978e73fd10c210b2324 not found: ID does not exist" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.406258 4768 scope.go:117] "RemoveContainer" containerID="0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181" Feb 17 14:21:25 crc kubenswrapper[4768]: E0217 
14:21:25.406921 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181\": container with ID starting with 0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181 not found: ID does not exist" containerID="0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.406958 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181"} err="failed to get container status \"0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181\": rpc error: code = NotFound desc = could not find container \"0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181\": container with ID starting with 0f4e711e4d57ddec8704a378e94c2b606aca3b2b9e5100c45b061ba68b659181 not found: ID does not exist" Feb 17 14:21:25 crc kubenswrapper[4768]: I0217 14:21:25.547434 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" path="/var/lib/kubelet/pods/e787272f-f1cb-4dcd-b2ed-8fe0661c6070/volumes" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.726774 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b77x7"] Feb 17 14:21:36 crc kubenswrapper[4768]: E0217 14:21:36.727852 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="registry-server" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.727870 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="registry-server" Feb 17 14:21:36 crc kubenswrapper[4768]: E0217 14:21:36.727884 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" 
containerName="extract-utilities" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.727892 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="extract-utilities" Feb 17 14:21:36 crc kubenswrapper[4768]: E0217 14:21:36.727915 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="extract-content" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.727926 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="extract-content" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.728157 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e787272f-f1cb-4dcd-b2ed-8fe0661c6070" containerName="registry-server" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.729730 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.751410 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b77x7"] Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.812443 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-catalog-content\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.812503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb678\" (UniqueName: \"kubernetes.io/projected/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-kube-api-access-fb678\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " 
pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.812567 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-utilities\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.914740 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb678\" (UniqueName: \"kubernetes.io/projected/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-kube-api-access-fb678\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.915062 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-utilities\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.915213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-catalog-content\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.915604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-utilities\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " 
pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.915611 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-catalog-content\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:36 crc kubenswrapper[4768]: I0217 14:21:36.934742 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb678\" (UniqueName: \"kubernetes.io/projected/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-kube-api-access-fb678\") pod \"redhat-marketplace-b77x7\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:37 crc kubenswrapper[4768]: I0217 14:21:37.050156 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:37 crc kubenswrapper[4768]: I0217 14:21:37.551913 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b77x7"] Feb 17 14:21:38 crc kubenswrapper[4768]: I0217 14:21:38.435072 4768 generic.go:334] "Generic (PLEG): container finished" podID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerID="ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381" exitCode=0 Feb 17 14:21:38 crc kubenswrapper[4768]: I0217 14:21:38.435158 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b77x7" event={"ID":"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2","Type":"ContainerDied","Data":"ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381"} Feb 17 14:21:38 crc kubenswrapper[4768]: I0217 14:21:38.435385 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b77x7" 
event={"ID":"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2","Type":"ContainerStarted","Data":"2bc51b3acc773bfcdf3575523d9b538a1b327c93a34bd0287145456fd684ec58"} Feb 17 14:21:39 crc kubenswrapper[4768]: I0217 14:21:39.445856 4768 generic.go:334] "Generic (PLEG): container finished" podID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerID="0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa" exitCode=0 Feb 17 14:21:39 crc kubenswrapper[4768]: I0217 14:21:39.445972 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b77x7" event={"ID":"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2","Type":"ContainerDied","Data":"0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa"} Feb 17 14:21:40 crc kubenswrapper[4768]: I0217 14:21:40.465309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b77x7" event={"ID":"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2","Type":"ContainerStarted","Data":"4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3"} Feb 17 14:21:40 crc kubenswrapper[4768]: I0217 14:21:40.492136 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b77x7" podStartSLOduration=3.028356686 podStartE2EDuration="4.492085907s" podCreationTimestamp="2026-02-17 14:21:36 +0000 UTC" firstStartedPulling="2026-02-17 14:21:38.437139091 +0000 UTC m=+2717.716525553" lastFinishedPulling="2026-02-17 14:21:39.900868322 +0000 UTC m=+2719.180254774" observedRunningTime="2026-02-17 14:21:40.490452842 +0000 UTC m=+2719.769839294" watchObservedRunningTime="2026-02-17 14:21:40.492085907 +0000 UTC m=+2719.771472379" Feb 17 14:21:47 crc kubenswrapper[4768]: I0217 14:21:47.050853 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:47 crc kubenswrapper[4768]: I0217 14:21:47.052194 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:47 crc kubenswrapper[4768]: I0217 14:21:47.104602 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:47 crc kubenswrapper[4768]: I0217 14:21:47.598067 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:47 crc kubenswrapper[4768]: I0217 14:21:47.641446 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b77x7"] Feb 17 14:21:49 crc kubenswrapper[4768]: I0217 14:21:49.543722 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b77x7" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="registry-server" containerID="cri-o://4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3" gracePeriod=2 Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.212386 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.391618 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb678\" (UniqueName: \"kubernetes.io/projected/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-kube-api-access-fb678\") pod \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.392142 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-catalog-content\") pod \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.392334 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-utilities\") pod \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\" (UID: \"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2\") " Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.393888 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-utilities" (OuterVolumeSpecName: "utilities") pod "5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" (UID: "5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.400251 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-kube-api-access-fb678" (OuterVolumeSpecName: "kube-api-access-fb678") pod "5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" (UID: "5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2"). InnerVolumeSpecName "kube-api-access-fb678". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.421896 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" (UID: "5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.495122 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb678\" (UniqueName: \"kubernetes.io/projected/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-kube-api-access-fb678\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.495163 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.495176 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.562383 4768 generic.go:334] "Generic (PLEG): container finished" podID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerID="4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3" exitCode=0 Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.562428 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b77x7" event={"ID":"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2","Type":"ContainerDied","Data":"4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3"} Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.562457 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-b77x7" event={"ID":"5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2","Type":"ContainerDied","Data":"2bc51b3acc773bfcdf3575523d9b538a1b327c93a34bd0287145456fd684ec58"} Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.562473 4768 scope.go:117] "RemoveContainer" containerID="4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.562498 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b77x7" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.589056 4768 scope.go:117] "RemoveContainer" containerID="0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.608965 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b77x7"] Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.617409 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b77x7"] Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.634997 4768 scope.go:117] "RemoveContainer" containerID="ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.682973 4768 scope.go:117] "RemoveContainer" containerID="4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3" Feb 17 14:21:50 crc kubenswrapper[4768]: E0217 14:21:50.683464 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3\": container with ID starting with 4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3 not found: ID does not exist" containerID="4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.683507 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3"} err="failed to get container status \"4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3\": rpc error: code = NotFound desc = could not find container \"4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3\": container with ID starting with 4ff431eab06e2e14d45d5e0a19f634ca5a15f77cca3400bf2474c70e51efbdd3 not found: ID does not exist" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.683532 4768 scope.go:117] "RemoveContainer" containerID="0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa" Feb 17 14:21:50 crc kubenswrapper[4768]: E0217 14:21:50.683932 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa\": container with ID starting with 0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa not found: ID does not exist" containerID="0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.683956 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa"} err="failed to get container status \"0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa\": rpc error: code = NotFound desc = could not find container \"0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa\": container with ID starting with 0fd747a548a70e8ee71d2b4a34c40b20b732ef1d55119c8c68c9ba0e07e629fa not found: ID does not exist" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.683970 4768 scope.go:117] "RemoveContainer" containerID="ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381" Feb 17 14:21:50 crc kubenswrapper[4768]: E0217 
14:21:50.684269 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381\": container with ID starting with ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381 not found: ID does not exist" containerID="ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381" Feb 17 14:21:50 crc kubenswrapper[4768]: I0217 14:21:50.684309 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381"} err="failed to get container status \"ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381\": rpc error: code = NotFound desc = could not find container \"ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381\": container with ID starting with ad2bb6e926b912545a12c78b8a24ccba737032a79cff7e2cc099842237afb381 not found: ID does not exist" Feb 17 14:21:51 crc kubenswrapper[4768]: I0217 14:21:51.545173 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" path="/var/lib/kubelet/pods/5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2/volumes" Feb 17 14:22:58 crc kubenswrapper[4768]: I0217 14:22:58.060256 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:22:58 crc kubenswrapper[4768]: I0217 14:22:58.060835 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 17 14:23:28 crc kubenswrapper[4768]: I0217 14:23:28.060562 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:23:28 crc kubenswrapper[4768]: I0217 14:23:28.061039 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.312239 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5w2gf"] Feb 17 14:23:41 crc kubenswrapper[4768]: E0217 14:23:41.313357 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="extract-utilities" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.313374 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="extract-utilities" Feb 17 14:23:41 crc kubenswrapper[4768]: E0217 14:23:41.313416 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="extract-content" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.313426 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="extract-content" Feb 17 14:23:41 crc kubenswrapper[4768]: E0217 14:23:41.313493 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="registry-server" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.313503 4768 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="registry-server" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.313832 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5199b8c8-cb92-4e8e-acfd-d9f8ce4b4ca2" containerName="registry-server" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.316390 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.332814 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5w2gf"] Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.495991 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-utilities\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.496410 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-catalog-content\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.496489 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmv7r\" (UniqueName: \"kubernetes.io/projected/a5889224-4b67-414b-bd55-319ad3051eb1-kube-api-access-lmv7r\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 
14:23:41.598649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-utilities\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.599257 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-utilities\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.599669 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-catalog-content\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.600002 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-catalog-content\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.600070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmv7r\" (UniqueName: \"kubernetes.io/projected/a5889224-4b67-414b-bd55-319ad3051eb1-kube-api-access-lmv7r\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.622601 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmv7r\" (UniqueName: \"kubernetes.io/projected/a5889224-4b67-414b-bd55-319ad3051eb1-kube-api-access-lmv7r\") pod \"community-operators-5w2gf\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:41 crc kubenswrapper[4768]: I0217 14:23:41.641239 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:42 crc kubenswrapper[4768]: I0217 14:23:42.072957 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5w2gf"] Feb 17 14:23:42 crc kubenswrapper[4768]: I0217 14:23:42.629692 4768 generic.go:334] "Generic (PLEG): container finished" podID="a5889224-4b67-414b-bd55-319ad3051eb1" containerID="c032dc4796680fb6eb80f821f003e7af3f5f67a4708904992cb803f3e1bc8a6f" exitCode=0 Feb 17 14:23:42 crc kubenswrapper[4768]: I0217 14:23:42.629790 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5w2gf" event={"ID":"a5889224-4b67-414b-bd55-319ad3051eb1","Type":"ContainerDied","Data":"c032dc4796680fb6eb80f821f003e7af3f5f67a4708904992cb803f3e1bc8a6f"} Feb 17 14:23:42 crc kubenswrapper[4768]: I0217 14:23:42.629938 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5w2gf" event={"ID":"a5889224-4b67-414b-bd55-319ad3051eb1","Type":"ContainerStarted","Data":"2f61b560362a2eac8a1927598274a8f77494ebded19c0a76d1ecc08225e86805"} Feb 17 14:23:43 crc kubenswrapper[4768]: I0217 14:23:43.641569 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5w2gf" event={"ID":"a5889224-4b67-414b-bd55-319ad3051eb1","Type":"ContainerStarted","Data":"4a8dd85fc75be460839bd12ec65bc69f79c5a58f9ee6637f204af527091bf6ae"} Feb 17 14:23:44 crc kubenswrapper[4768]: I0217 14:23:44.656230 4768 
generic.go:334] "Generic (PLEG): container finished" podID="a5889224-4b67-414b-bd55-319ad3051eb1" containerID="4a8dd85fc75be460839bd12ec65bc69f79c5a58f9ee6637f204af527091bf6ae" exitCode=0 Feb 17 14:23:44 crc kubenswrapper[4768]: I0217 14:23:44.656388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5w2gf" event={"ID":"a5889224-4b67-414b-bd55-319ad3051eb1","Type":"ContainerDied","Data":"4a8dd85fc75be460839bd12ec65bc69f79c5a58f9ee6637f204af527091bf6ae"} Feb 17 14:23:45 crc kubenswrapper[4768]: I0217 14:23:45.667135 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5w2gf" event={"ID":"a5889224-4b67-414b-bd55-319ad3051eb1","Type":"ContainerStarted","Data":"21a3318c310d84648be5d57b11a75e3da26bd2aae28e16fb36be3431de62bc12"} Feb 17 14:23:45 crc kubenswrapper[4768]: I0217 14:23:45.693083 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5w2gf" podStartSLOduration=2.2875545649999998 podStartE2EDuration="4.693053391s" podCreationTimestamp="2026-02-17 14:23:41 +0000 UTC" firstStartedPulling="2026-02-17 14:23:42.634819631 +0000 UTC m=+2841.914206083" lastFinishedPulling="2026-02-17 14:23:45.040318437 +0000 UTC m=+2844.319704909" observedRunningTime="2026-02-17 14:23:45.687313423 +0000 UTC m=+2844.966699875" watchObservedRunningTime="2026-02-17 14:23:45.693053391 +0000 UTC m=+2844.972439823" Feb 17 14:23:51 crc kubenswrapper[4768]: I0217 14:23:51.642285 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:51 crc kubenswrapper[4768]: I0217 14:23:51.642892 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:51 crc kubenswrapper[4768]: I0217 14:23:51.735615 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:51 crc kubenswrapper[4768]: I0217 14:23:51.791278 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:51 crc kubenswrapper[4768]: I0217 14:23:51.966793 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5w2gf"] Feb 17 14:23:53 crc kubenswrapper[4768]: I0217 14:23:53.728226 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5w2gf" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="registry-server" containerID="cri-o://21a3318c310d84648be5d57b11a75e3da26bd2aae28e16fb36be3431de62bc12" gracePeriod=2 Feb 17 14:23:54 crc kubenswrapper[4768]: I0217 14:23:54.740055 4768 generic.go:334] "Generic (PLEG): container finished" podID="a5889224-4b67-414b-bd55-319ad3051eb1" containerID="21a3318c310d84648be5d57b11a75e3da26bd2aae28e16fb36be3431de62bc12" exitCode=0 Feb 17 14:23:54 crc kubenswrapper[4768]: I0217 14:23:54.740129 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5w2gf" event={"ID":"a5889224-4b67-414b-bd55-319ad3051eb1","Type":"ContainerDied","Data":"21a3318c310d84648be5d57b11a75e3da26bd2aae28e16fb36be3431de62bc12"} Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.433564 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.589132 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-utilities\") pod \"a5889224-4b67-414b-bd55-319ad3051eb1\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.589620 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-catalog-content\") pod \"a5889224-4b67-414b-bd55-319ad3051eb1\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.590056 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-utilities" (OuterVolumeSpecName: "utilities") pod "a5889224-4b67-414b-bd55-319ad3051eb1" (UID: "a5889224-4b67-414b-bd55-319ad3051eb1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.590390 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmv7r\" (UniqueName: \"kubernetes.io/projected/a5889224-4b67-414b-bd55-319ad3051eb1-kube-api-access-lmv7r\") pod \"a5889224-4b67-414b-bd55-319ad3051eb1\" (UID: \"a5889224-4b67-414b-bd55-319ad3051eb1\") " Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.592283 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.610389 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5889224-4b67-414b-bd55-319ad3051eb1-kube-api-access-lmv7r" (OuterVolumeSpecName: "kube-api-access-lmv7r") pod "a5889224-4b67-414b-bd55-319ad3051eb1" (UID: "a5889224-4b67-414b-bd55-319ad3051eb1"). InnerVolumeSpecName "kube-api-access-lmv7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.640478 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5889224-4b67-414b-bd55-319ad3051eb1" (UID: "a5889224-4b67-414b-bd55-319ad3051eb1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.693493 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmv7r\" (UniqueName: \"kubernetes.io/projected/a5889224-4b67-414b-bd55-319ad3051eb1-kube-api-access-lmv7r\") on node \"crc\" DevicePath \"\"" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.693527 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5889224-4b67-414b-bd55-319ad3051eb1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.751198 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5w2gf" event={"ID":"a5889224-4b67-414b-bd55-319ad3051eb1","Type":"ContainerDied","Data":"2f61b560362a2eac8a1927598274a8f77494ebded19c0a76d1ecc08225e86805"} Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.751683 4768 scope.go:117] "RemoveContainer" containerID="21a3318c310d84648be5d57b11a75e3da26bd2aae28e16fb36be3431de62bc12" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.751315 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5w2gf" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.787258 4768 scope.go:117] "RemoveContainer" containerID="4a8dd85fc75be460839bd12ec65bc69f79c5a58f9ee6637f204af527091bf6ae" Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.807766 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5w2gf"] Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.815201 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5w2gf"] Feb 17 14:23:55 crc kubenswrapper[4768]: I0217 14:23:55.832161 4768 scope.go:117] "RemoveContainer" containerID="c032dc4796680fb6eb80f821f003e7af3f5f67a4708904992cb803f3e1bc8a6f" Feb 17 14:23:57 crc kubenswrapper[4768]: I0217 14:23:57.551016 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" path="/var/lib/kubelet/pods/a5889224-4b67-414b-bd55-319ad3051eb1/volumes" Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.060735 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.060872 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.061000 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 
14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.062590 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.062756 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" gracePeriod=600 Feb 17 14:23:58 crc kubenswrapper[4768]: E0217 14:23:58.185257 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.783161 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" exitCode=0 Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.783205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965"} Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.783238 4768 scope.go:117] 
"RemoveContainer" containerID="5e19e1a31aa1dc6bcd956bc608001955d760aa97c517ab661175716241ae79b7" Feb 17 14:23:58 crc kubenswrapper[4768]: I0217 14:23:58.784256 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:23:58 crc kubenswrapper[4768]: E0217 14:23:58.784906 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:24:13 crc kubenswrapper[4768]: I0217 14:24:13.534616 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:24:13 crc kubenswrapper[4768]: E0217 14:24:13.535393 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:24:24 crc kubenswrapper[4768]: I0217 14:24:24.534023 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:24:24 crc kubenswrapper[4768]: E0217 14:24:24.534821 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:24:35 crc kubenswrapper[4768]: I0217 14:24:35.534664 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:24:35 crc kubenswrapper[4768]: E0217 14:24:35.535435 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:24:50 crc kubenswrapper[4768]: I0217 14:24:50.534266 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:24:50 crc kubenswrapper[4768]: E0217 14:24:50.535347 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:25:05 crc kubenswrapper[4768]: I0217 14:25:05.535007 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:25:05 crc kubenswrapper[4768]: E0217 14:25:05.536060 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:25:20 crc kubenswrapper[4768]: I0217 14:25:20.535236 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:25:20 crc kubenswrapper[4768]: E0217 14:25:20.536000 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:25:35 crc kubenswrapper[4768]: I0217 14:25:35.535217 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:25:35 crc kubenswrapper[4768]: E0217 14:25:35.536035 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:25:46 crc kubenswrapper[4768]: I0217 14:25:46.534803 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:25:46 crc kubenswrapper[4768]: E0217 14:25:46.535769 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:25:59 crc kubenswrapper[4768]: I0217 14:25:59.535217 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:25:59 crc kubenswrapper[4768]: E0217 14:25:59.536240 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:26:14 crc kubenswrapper[4768]: I0217 14:26:14.535015 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:26:14 crc kubenswrapper[4768]: E0217 14:26:14.536201 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:26:28 crc kubenswrapper[4768]: I0217 14:26:28.535120 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:26:28 crc kubenswrapper[4768]: E0217 14:26:28.536023 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:26:39 crc kubenswrapper[4768]: I0217 14:26:39.535076 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:26:39 crc kubenswrapper[4768]: E0217 14:26:39.536086 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:26:54 crc kubenswrapper[4768]: I0217 14:26:54.535319 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:26:54 crc kubenswrapper[4768]: E0217 14:26:54.536311 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:27:09 crc kubenswrapper[4768]: I0217 14:27:09.534393 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:27:09 crc kubenswrapper[4768]: E0217 14:27:09.535175 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:27:22 crc kubenswrapper[4768]: I0217 14:27:22.534482 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:27:22 crc kubenswrapper[4768]: E0217 14:27:22.535194 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:27:37 crc kubenswrapper[4768]: I0217 14:27:37.534979 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:27:37 crc kubenswrapper[4768]: E0217 14:27:37.535859 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:27:51 crc kubenswrapper[4768]: I0217 14:27:51.541845 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:27:51 crc kubenswrapper[4768]: E0217 14:27:51.544288 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:28:02 crc kubenswrapper[4768]: I0217 14:28:02.534163 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:28:02 crc kubenswrapper[4768]: E0217 14:28:02.534882 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:28:14 crc kubenswrapper[4768]: I0217 14:28:14.535081 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:28:14 crc kubenswrapper[4768]: E0217 14:28:14.536949 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:28:26 crc kubenswrapper[4768]: I0217 14:28:26.535034 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:28:26 crc kubenswrapper[4768]: E0217 14:28:26.536636 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:28:37 crc kubenswrapper[4768]: I0217 14:28:37.534298 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:28:37 crc kubenswrapper[4768]: E0217 14:28:37.535218 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:28:48 crc kubenswrapper[4768]: I0217 14:28:48.535475 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:28:48 crc kubenswrapper[4768]: E0217 14:28:48.536603 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:29:01 crc kubenswrapper[4768]: I0217 14:29:01.541311 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:29:02 crc kubenswrapper[4768]: I0217 14:29:02.612086 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"ec2ca14acc6c65d44de9f0616d9b906984ff6fc79d13b91607724b751e6b996b"} Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.060277 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jfffv"] Feb 17 14:29:14 crc kubenswrapper[4768]: E0217 14:29:14.062508 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="registry-server" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.062639 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="registry-server" Feb 17 14:29:14 crc kubenswrapper[4768]: E0217 14:29:14.062742 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="extract-utilities" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.062815 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="extract-utilities" Feb 17 14:29:14 crc kubenswrapper[4768]: E0217 14:29:14.062898 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="extract-content" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.062974 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="extract-content" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.063320 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5889224-4b67-414b-bd55-319ad3051eb1" containerName="registry-server" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.065139 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.070862 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfffv"] Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.220097 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-catalog-content\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.220164 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-utilities\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.220376 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7j5\" (UniqueName: \"kubernetes.io/projected/4f0a2eb6-2e08-4743-9841-a97f5061e411-kube-api-access-mm7j5\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.322438 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-catalog-content\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.322724 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-utilities\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.322880 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm7j5\" (UniqueName: \"kubernetes.io/projected/4f0a2eb6-2e08-4743-9841-a97f5061e411-kube-api-access-mm7j5\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.323372 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-catalog-content\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.323383 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-utilities\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.346297 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm7j5\" (UniqueName: \"kubernetes.io/projected/4f0a2eb6-2e08-4743-9841-a97f5061e411-kube-api-access-mm7j5\") pod \"certified-operators-jfffv\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:14 crc kubenswrapper[4768]: I0217 14:29:14.390956 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:15 crc kubenswrapper[4768]: I0217 14:29:15.000952 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfffv"] Feb 17 14:29:15 crc kubenswrapper[4768]: I0217 14:29:15.729765 4768 generic.go:334] "Generic (PLEG): container finished" podID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerID="3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4" exitCode=0 Feb 17 14:29:15 crc kubenswrapper[4768]: I0217 14:29:15.730024 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfffv" event={"ID":"4f0a2eb6-2e08-4743-9841-a97f5061e411","Type":"ContainerDied","Data":"3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4"} Feb 17 14:29:15 crc kubenswrapper[4768]: I0217 14:29:15.730050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfffv" event={"ID":"4f0a2eb6-2e08-4743-9841-a97f5061e411","Type":"ContainerStarted","Data":"f59a24424c921fa934607081a0dd8d8e5d973ab52957f75d9f643165f754d245"} Feb 17 14:29:15 crc kubenswrapper[4768]: I0217 14:29:15.733186 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:29:16 crc kubenswrapper[4768]: I0217 14:29:16.743179 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfffv" event={"ID":"4f0a2eb6-2e08-4743-9841-a97f5061e411","Type":"ContainerStarted","Data":"a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf"} Feb 17 14:29:17 crc kubenswrapper[4768]: I0217 14:29:17.752873 4768 generic.go:334] "Generic (PLEG): container finished" podID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerID="a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf" exitCode=0 Feb 17 14:29:17 crc kubenswrapper[4768]: I0217 14:29:17.752934 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-jfffv" event={"ID":"4f0a2eb6-2e08-4743-9841-a97f5061e411","Type":"ContainerDied","Data":"a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf"} Feb 17 14:29:18 crc kubenswrapper[4768]: I0217 14:29:18.769178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfffv" event={"ID":"4f0a2eb6-2e08-4743-9841-a97f5061e411","Type":"ContainerStarted","Data":"6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0"} Feb 17 14:29:18 crc kubenswrapper[4768]: I0217 14:29:18.789031 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jfffv" podStartSLOduration=2.393364465 podStartE2EDuration="4.789009239s" podCreationTimestamp="2026-02-17 14:29:14 +0000 UTC" firstStartedPulling="2026-02-17 14:29:15.732870728 +0000 UTC m=+3175.012257180" lastFinishedPulling="2026-02-17 14:29:18.128515522 +0000 UTC m=+3177.407901954" observedRunningTime="2026-02-17 14:29:18.788521966 +0000 UTC m=+3178.067908418" watchObservedRunningTime="2026-02-17 14:29:18.789009239 +0000 UTC m=+3178.068395681" Feb 17 14:29:24 crc kubenswrapper[4768]: I0217 14:29:24.391623 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:24 crc kubenswrapper[4768]: I0217 14:29:24.392209 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:24 crc kubenswrapper[4768]: I0217 14:29:24.477912 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:24 crc kubenswrapper[4768]: I0217 14:29:24.930629 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:25 crc kubenswrapper[4768]: I0217 
14:29:25.452735 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfffv"] Feb 17 14:29:26 crc kubenswrapper[4768]: I0217 14:29:26.834219 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jfffv" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="registry-server" containerID="cri-o://6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0" gracePeriod=2 Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.380039 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.471346 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm7j5\" (UniqueName: \"kubernetes.io/projected/4f0a2eb6-2e08-4743-9841-a97f5061e411-kube-api-access-mm7j5\") pod \"4f0a2eb6-2e08-4743-9841-a97f5061e411\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.471507 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-utilities\") pod \"4f0a2eb6-2e08-4743-9841-a97f5061e411\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.471736 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-catalog-content\") pod \"4f0a2eb6-2e08-4743-9841-a97f5061e411\" (UID: \"4f0a2eb6-2e08-4743-9841-a97f5061e411\") " Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.472658 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-utilities" (OuterVolumeSpecName: 
"utilities") pod "4f0a2eb6-2e08-4743-9841-a97f5061e411" (UID: "4f0a2eb6-2e08-4743-9841-a97f5061e411"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.477402 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f0a2eb6-2e08-4743-9841-a97f5061e411-kube-api-access-mm7j5" (OuterVolumeSpecName: "kube-api-access-mm7j5") pod "4f0a2eb6-2e08-4743-9841-a97f5061e411" (UID: "4f0a2eb6-2e08-4743-9841-a97f5061e411"). InnerVolumeSpecName "kube-api-access-mm7j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.550562 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f0a2eb6-2e08-4743-9841-a97f5061e411" (UID: "4f0a2eb6-2e08-4743-9841-a97f5061e411"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.573877 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm7j5\" (UniqueName: \"kubernetes.io/projected/4f0a2eb6-2e08-4743-9841-a97f5061e411-kube-api-access-mm7j5\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.574659 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.574785 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f0a2eb6-2e08-4743-9841-a97f5061e411-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.846200 4768 generic.go:334] "Generic (PLEG): container finished" podID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerID="6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0" exitCode=0 Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.846236 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfffv" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.846247 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfffv" event={"ID":"4f0a2eb6-2e08-4743-9841-a97f5061e411","Type":"ContainerDied","Data":"6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0"} Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.846276 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfffv" event={"ID":"4f0a2eb6-2e08-4743-9841-a97f5061e411","Type":"ContainerDied","Data":"f59a24424c921fa934607081a0dd8d8e5d973ab52957f75d9f643165f754d245"} Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.846297 4768 scope.go:117] "RemoveContainer" containerID="6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.878346 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfffv"] Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.879841 4768 scope.go:117] "RemoveContainer" containerID="a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.886768 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jfffv"] Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.916139 4768 scope.go:117] "RemoveContainer" containerID="3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.951313 4768 scope.go:117] "RemoveContainer" containerID="6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0" Feb 17 14:29:27 crc kubenswrapper[4768]: E0217 14:29:27.951753 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0\": container with ID starting with 6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0 not found: ID does not exist" containerID="6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.951794 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0"} err="failed to get container status \"6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0\": rpc error: code = NotFound desc = could not find container \"6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0\": container with ID starting with 6a4f4fe762a29f5e47c42a0f48a4cf7aa6155f281eee47d1c1d36f9043d622e0 not found: ID does not exist" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.951818 4768 scope.go:117] "RemoveContainer" containerID="a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf" Feb 17 14:29:27 crc kubenswrapper[4768]: E0217 14:29:27.952173 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf\": container with ID starting with a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf not found: ID does not exist" containerID="a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.952198 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf"} err="failed to get container status \"a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf\": rpc error: code = NotFound desc = could not find container \"a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf\": container with ID 
starting with a077bcbed14fede57b3034769a6f556a048fb6827bb3264226cc3506fbfacaaf not found: ID does not exist" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.952213 4768 scope.go:117] "RemoveContainer" containerID="3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4" Feb 17 14:29:27 crc kubenswrapper[4768]: E0217 14:29:27.952444 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4\": container with ID starting with 3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4 not found: ID does not exist" containerID="3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4" Feb 17 14:29:27 crc kubenswrapper[4768]: I0217 14:29:27.952467 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4"} err="failed to get container status \"3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4\": rpc error: code = NotFound desc = could not find container \"3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4\": container with ID starting with 3190f8cd0261415997d4437a4f5a451eecb249e5a026c428b0e26fdba39f41c4 not found: ID does not exist" Feb 17 14:29:29 crc kubenswrapper[4768]: I0217 14:29:29.561323 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" path="/var/lib/kubelet/pods/4f0a2eb6-2e08-4743-9841-a97f5061e411/volumes" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.148740 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w"] Feb 17 14:30:00 crc kubenswrapper[4768]: E0217 14:30:00.149589 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="registry-server" Feb 17 
14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.149614 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="registry-server" Feb 17 14:30:00 crc kubenswrapper[4768]: E0217 14:30:00.149632 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="extract-content" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.149639 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="extract-content" Feb 17 14:30:00 crc kubenswrapper[4768]: E0217 14:30:00.149664 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="extract-utilities" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.149674 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="extract-utilities" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.149863 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f0a2eb6-2e08-4743-9841-a97f5061e411" containerName="registry-server" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.150577 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.152842 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.153361 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.158463 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w"] Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.243582 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/688a6188-f6bf-4c8d-94fb-f03130e4c634-secret-volume\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.243675 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw8jp\" (UniqueName: \"kubernetes.io/projected/688a6188-f6bf-4c8d-94fb-f03130e4c634-kube-api-access-gw8jp\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.243770 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/688a6188-f6bf-4c8d-94fb-f03130e4c634-config-volume\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.345432 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw8jp\" (UniqueName: \"kubernetes.io/projected/688a6188-f6bf-4c8d-94fb-f03130e4c634-kube-api-access-gw8jp\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.345805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/688a6188-f6bf-4c8d-94fb-f03130e4c634-config-volume\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.345992 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/688a6188-f6bf-4c8d-94fb-f03130e4c634-secret-volume\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.346816 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/688a6188-f6bf-4c8d-94fb-f03130e4c634-config-volume\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.352539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/688a6188-f6bf-4c8d-94fb-f03130e4c634-secret-volume\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.361760 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw8jp\" (UniqueName: \"kubernetes.io/projected/688a6188-f6bf-4c8d-94fb-f03130e4c634-kube-api-access-gw8jp\") pod \"collect-profiles-29522310-pvf4w\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.479633 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:00 crc kubenswrapper[4768]: I0217 14:30:00.931484 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w"] Feb 17 14:30:01 crc kubenswrapper[4768]: I0217 14:30:01.193693 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" event={"ID":"688a6188-f6bf-4c8d-94fb-f03130e4c634","Type":"ContainerStarted","Data":"98e3f96243380b7812bd8ab888236fa05a29e2a74204ef045f8cffc64a603ff5"} Feb 17 14:30:01 crc kubenswrapper[4768]: I0217 14:30:01.194018 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" event={"ID":"688a6188-f6bf-4c8d-94fb-f03130e4c634","Type":"ContainerStarted","Data":"da62acc7756169c1543e42ee7bf63e7008f7c42b42ce5824d3e5f8c8edb7a47e"} Feb 17 14:30:01 crc kubenswrapper[4768]: I0217 14:30:01.218688 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" 
podStartSLOduration=1.2186658750000001 podStartE2EDuration="1.218665875s" podCreationTimestamp="2026-02-17 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:30:01.21152172 +0000 UTC m=+3220.490908162" watchObservedRunningTime="2026-02-17 14:30:01.218665875 +0000 UTC m=+3220.498052317" Feb 17 14:30:02 crc kubenswrapper[4768]: I0217 14:30:02.207165 4768 generic.go:334] "Generic (PLEG): container finished" podID="688a6188-f6bf-4c8d-94fb-f03130e4c634" containerID="98e3f96243380b7812bd8ab888236fa05a29e2a74204ef045f8cffc64a603ff5" exitCode=0 Feb 17 14:30:02 crc kubenswrapper[4768]: I0217 14:30:02.207236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" event={"ID":"688a6188-f6bf-4c8d-94fb-f03130e4c634","Type":"ContainerDied","Data":"98e3f96243380b7812bd8ab888236fa05a29e2a74204ef045f8cffc64a603ff5"} Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.612344 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.805463 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw8jp\" (UniqueName: \"kubernetes.io/projected/688a6188-f6bf-4c8d-94fb-f03130e4c634-kube-api-access-gw8jp\") pod \"688a6188-f6bf-4c8d-94fb-f03130e4c634\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.805601 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/688a6188-f6bf-4c8d-94fb-f03130e4c634-secret-volume\") pod \"688a6188-f6bf-4c8d-94fb-f03130e4c634\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.805705 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/688a6188-f6bf-4c8d-94fb-f03130e4c634-config-volume\") pod \"688a6188-f6bf-4c8d-94fb-f03130e4c634\" (UID: \"688a6188-f6bf-4c8d-94fb-f03130e4c634\") " Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.806499 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688a6188-f6bf-4c8d-94fb-f03130e4c634-config-volume" (OuterVolumeSpecName: "config-volume") pod "688a6188-f6bf-4c8d-94fb-f03130e4c634" (UID: "688a6188-f6bf-4c8d-94fb-f03130e4c634"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.811492 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688a6188-f6bf-4c8d-94fb-f03130e4c634-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "688a6188-f6bf-4c8d-94fb-f03130e4c634" (UID: "688a6188-f6bf-4c8d-94fb-f03130e4c634"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.811766 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688a6188-f6bf-4c8d-94fb-f03130e4c634-kube-api-access-gw8jp" (OuterVolumeSpecName: "kube-api-access-gw8jp") pod "688a6188-f6bf-4c8d-94fb-f03130e4c634" (UID: "688a6188-f6bf-4c8d-94fb-f03130e4c634"). InnerVolumeSpecName "kube-api-access-gw8jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.908153 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/688a6188-f6bf-4c8d-94fb-f03130e4c634-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.908194 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/688a6188-f6bf-4c8d-94fb-f03130e4c634-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:03 crc kubenswrapper[4768]: I0217 14:30:03.908206 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw8jp\" (UniqueName: \"kubernetes.io/projected/688a6188-f6bf-4c8d-94fb-f03130e4c634-kube-api-access-gw8jp\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:04 crc kubenswrapper[4768]: I0217 14:30:04.224259 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" Feb 17 14:30:04 crc kubenswrapper[4768]: I0217 14:30:04.224313 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522310-pvf4w" event={"ID":"688a6188-f6bf-4c8d-94fb-f03130e4c634","Type":"ContainerDied","Data":"da62acc7756169c1543e42ee7bf63e7008f7c42b42ce5824d3e5f8c8edb7a47e"} Feb 17 14:30:04 crc kubenswrapper[4768]: I0217 14:30:04.224352 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da62acc7756169c1543e42ee7bf63e7008f7c42b42ce5824d3e5f8c8edb7a47e" Feb 17 14:30:04 crc kubenswrapper[4768]: I0217 14:30:04.291696 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7"] Feb 17 14:30:04 crc kubenswrapper[4768]: I0217 14:30:04.300215 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522265-827m7"] Feb 17 14:30:05 crc kubenswrapper[4768]: I0217 14:30:05.547141 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31035514-a227-4c3c-b638-baa3165746d6" path="/var/lib/kubelet/pods/31035514-a227-4c3c-b638-baa3165746d6/volumes" Feb 17 14:30:29 crc kubenswrapper[4768]: I0217 14:30:29.924707 4768 scope.go:117] "RemoveContainer" containerID="e27bc4595eff5b92c47140ca83c86a5b69c343ac2b274894137306cac14203e4" Feb 17 14:30:37 crc kubenswrapper[4768]: I0217 14:30:37.543626 4768 generic.go:334] "Generic (PLEG): container finished" podID="780f2ee6-f4d9-455c-97e6-7e6451706324" containerID="75a350e1d00cd063f9c25d6cd1f8e553497147a0a776b96659f66a101e7b5969" exitCode=0 Feb 17 14:30:37 crc kubenswrapper[4768]: I0217 14:30:37.549228 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"780f2ee6-f4d9-455c-97e6-7e6451706324","Type":"ContainerDied","Data":"75a350e1d00cd063f9c25d6cd1f8e553497147a0a776b96659f66a101e7b5969"} Feb 17 14:30:38 crc kubenswrapper[4768]: I0217 14:30:38.974617 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094421 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-temporary\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094489 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config-secret\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094575 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094597 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094646 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-config-data\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094703 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-workdir\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094724 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ca-certs\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.094753 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xndzs\" (UniqueName: \"kubernetes.io/projected/780f2ee6-f4d9-455c-97e6-7e6451706324-kube-api-access-xndzs\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.095680 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.095761 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ssh-key\") pod \"780f2ee6-f4d9-455c-97e6-7e6451706324\" (UID: \"780f2ee6-f4d9-455c-97e6-7e6451706324\") " Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.095992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-config-data" (OuterVolumeSpecName: "config-data") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.097034 4768 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.097089 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.100266 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.100915 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.102745 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780f2ee6-f4d9-455c-97e6-7e6451706324-kube-api-access-xndzs" (OuterVolumeSpecName: "kube-api-access-xndzs") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "kube-api-access-xndzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.137548 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.138172 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.138787 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.145287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "780f2ee6-f4d9-455c-97e6-7e6451706324" (UID: "780f2ee6-f4d9-455c-97e6-7e6451706324"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.199518 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.199574 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.199601 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/780f2ee6-f4d9-455c-97e6-7e6451706324-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.199653 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 17 14:30:39 crc 
kubenswrapper[4768]: I0217 14:30:39.199676 4768 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/780f2ee6-f4d9-455c-97e6-7e6451706324-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.199696 4768 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/780f2ee6-f4d9-455c-97e6-7e6451706324-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.199715 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xndzs\" (UniqueName: \"kubernetes.io/projected/780f2ee6-f4d9-455c-97e6-7e6451706324-kube-api-access-xndzs\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.237643 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.301579 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.569882 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"780f2ee6-f4d9-455c-97e6-7e6451706324","Type":"ContainerDied","Data":"75cfef6f7ff5a41f3d5dd8621ce0981f46aed73da6a8c4152b01b74868f99792"} Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.569948 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75cfef6f7ff5a41f3d5dd8621ce0981f46aed73da6a8c4152b01b74868f99792" Feb 17 14:30:39 crc kubenswrapper[4768]: I0217 14:30:39.570000 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.929448 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 17 14:30:48 crc kubenswrapper[4768]: E0217 14:30:48.931342 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780f2ee6-f4d9-455c-97e6-7e6451706324" containerName="tempest-tests-tempest-tests-runner" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.931372 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="780f2ee6-f4d9-455c-97e6-7e6451706324" containerName="tempest-tests-tempest-tests-runner" Feb 17 14:30:48 crc kubenswrapper[4768]: E0217 14:30:48.931407 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688a6188-f6bf-4c8d-94fb-f03130e4c634" containerName="collect-profiles" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.931421 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="688a6188-f6bf-4c8d-94fb-f03130e4c634" containerName="collect-profiles" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.931821 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="688a6188-f6bf-4c8d-94fb-f03130e4c634" containerName="collect-profiles" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.931855 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="780f2ee6-f4d9-455c-97e6-7e6451706324" containerName="tempest-tests-tempest-tests-runner" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.933027 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.937582 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-clc7g" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.940924 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.997938 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ecebf14-59d4-448f-8f09-f3b51ebd695e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:48 crc kubenswrapper[4768]: I0217 14:30:48.998064 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2b59\" (UniqueName: \"kubernetes.io/projected/0ecebf14-59d4-448f-8f09-f3b51ebd695e-kube-api-access-q2b59\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ecebf14-59d4-448f-8f09-f3b51ebd695e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:49 crc kubenswrapper[4768]: I0217 14:30:49.100279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ecebf14-59d4-448f-8f09-f3b51ebd695e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:49 crc kubenswrapper[4768]: I0217 14:30:49.100446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2b59\" (UniqueName: 
\"kubernetes.io/projected/0ecebf14-59d4-448f-8f09-f3b51ebd695e-kube-api-access-q2b59\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ecebf14-59d4-448f-8f09-f3b51ebd695e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:49 crc kubenswrapper[4768]: I0217 14:30:49.100878 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ecebf14-59d4-448f-8f09-f3b51ebd695e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:49 crc kubenswrapper[4768]: I0217 14:30:49.125160 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2b59\" (UniqueName: \"kubernetes.io/projected/0ecebf14-59d4-448f-8f09-f3b51ebd695e-kube-api-access-q2b59\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ecebf14-59d4-448f-8f09-f3b51ebd695e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:49 crc kubenswrapper[4768]: I0217 14:30:49.133417 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ecebf14-59d4-448f-8f09-f3b51ebd695e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:49 crc kubenswrapper[4768]: I0217 14:30:49.274046 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 17 14:30:49 crc kubenswrapper[4768]: I0217 14:30:49.712809 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 17 14:30:49 crc kubenswrapper[4768]: W0217 14:30:49.715563 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ecebf14_59d4_448f_8f09_f3b51ebd695e.slice/crio-e43b095f6b86e384f6af8125051b896d2ada31888e6b26974e0514c234c0e994 WatchSource:0}: Error finding container e43b095f6b86e384f6af8125051b896d2ada31888e6b26974e0514c234c0e994: Status 404 returned error can't find the container with id e43b095f6b86e384f6af8125051b896d2ada31888e6b26974e0514c234c0e994 Feb 17 14:30:50 crc kubenswrapper[4768]: I0217 14:30:50.706469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0ecebf14-59d4-448f-8f09-f3b51ebd695e","Type":"ContainerStarted","Data":"e43b095f6b86e384f6af8125051b896d2ada31888e6b26974e0514c234c0e994"} Feb 17 14:30:51 crc kubenswrapper[4768]: I0217 14:30:51.718933 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0ecebf14-59d4-448f-8f09-f3b51ebd695e","Type":"ContainerStarted","Data":"dc54d625435be98513728ed2d0dde981e07c19556c2c337a8c7a79176cc70229"} Feb 17 14:30:51 crc kubenswrapper[4768]: I0217 14:30:51.739917 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.857285622 podStartE2EDuration="3.739894507s" podCreationTimestamp="2026-02-17 14:30:48 +0000 UTC" firstStartedPulling="2026-02-17 14:30:49.717767067 +0000 UTC m=+3268.997153509" lastFinishedPulling="2026-02-17 14:30:50.600375952 +0000 UTC m=+3269.879762394" 
observedRunningTime="2026-02-17 14:30:51.737669866 +0000 UTC m=+3271.017056328" watchObservedRunningTime="2026-02-17 14:30:51.739894507 +0000 UTC m=+3271.019280949" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.303547 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kmvln/must-gather-2qkjw"] Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.309267 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.311584 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kmvln"/"default-dockercfg-b5bbf" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.311642 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kmvln"/"kube-root-ca.crt" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.324375 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kmvln"/"openshift-service-ca.crt" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.332779 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kmvln/must-gather-2qkjw"] Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.357553 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfnkw\" (UniqueName: \"kubernetes.io/projected/0435c3c9-4bd7-46a0-9cd2-df744778c614-kube-api-access-hfnkw\") pod \"must-gather-2qkjw\" (UID: \"0435c3c9-4bd7-46a0-9cd2-df744778c614\") " pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.357671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0435c3c9-4bd7-46a0-9cd2-df744778c614-must-gather-output\") pod \"must-gather-2qkjw\" (UID: 
\"0435c3c9-4bd7-46a0-9cd2-df744778c614\") " pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.459054 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfnkw\" (UniqueName: \"kubernetes.io/projected/0435c3c9-4bd7-46a0-9cd2-df744778c614-kube-api-access-hfnkw\") pod \"must-gather-2qkjw\" (UID: \"0435c3c9-4bd7-46a0-9cd2-df744778c614\") " pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.459244 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0435c3c9-4bd7-46a0-9cd2-df744778c614-must-gather-output\") pod \"must-gather-2qkjw\" (UID: \"0435c3c9-4bd7-46a0-9cd2-df744778c614\") " pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.459766 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0435c3c9-4bd7-46a0-9cd2-df744778c614-must-gather-output\") pod \"must-gather-2qkjw\" (UID: \"0435c3c9-4bd7-46a0-9cd2-df744778c614\") " pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.495866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfnkw\" (UniqueName: \"kubernetes.io/projected/0435c3c9-4bd7-46a0-9cd2-df744778c614-kube-api-access-hfnkw\") pod \"must-gather-2qkjw\" (UID: \"0435c3c9-4bd7-46a0-9cd2-df744778c614\") " pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:13 crc kubenswrapper[4768]: I0217 14:31:13.631675 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmvln/must-gather-2qkjw" Feb 17 14:31:14 crc kubenswrapper[4768]: I0217 14:31:14.058081 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kmvln/must-gather-2qkjw"] Feb 17 14:31:14 crc kubenswrapper[4768]: I0217 14:31:14.950733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/must-gather-2qkjw" event={"ID":"0435c3c9-4bd7-46a0-9cd2-df744778c614","Type":"ContainerStarted","Data":"a53863b5df8128de55fee78cf74e79a4cfb9e5a121ec218fda52d265f4cc8bae"} Feb 17 14:31:21 crc kubenswrapper[4768]: I0217 14:31:21.022818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/must-gather-2qkjw" event={"ID":"0435c3c9-4bd7-46a0-9cd2-df744778c614","Type":"ContainerStarted","Data":"aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb"} Feb 17 14:31:21 crc kubenswrapper[4768]: I0217 14:31:21.023405 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/must-gather-2qkjw" event={"ID":"0435c3c9-4bd7-46a0-9cd2-df744778c614","Type":"ContainerStarted","Data":"93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1"} Feb 17 14:31:21 crc kubenswrapper[4768]: I0217 14:31:21.063231 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kmvln/must-gather-2qkjw" podStartSLOduration=2.231413953 podStartE2EDuration="8.063211094s" podCreationTimestamp="2026-02-17 14:31:13 +0000 UTC" firstStartedPulling="2026-02-17 14:31:14.065687373 +0000 UTC m=+3293.345073815" lastFinishedPulling="2026-02-17 14:31:19.897484514 +0000 UTC m=+3299.176870956" observedRunningTime="2026-02-17 14:31:21.051539705 +0000 UTC m=+3300.330926147" watchObservedRunningTime="2026-02-17 14:31:21.063211094 +0000 UTC m=+3300.342597536" Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.722426 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-kmvln/crc-debug-7gxzd"] Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.724145 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.783936 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac873ef3-0874-4dab-91b9-675b74f13ad3-host\") pod \"crc-debug-7gxzd\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.784436 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trx47\" (UniqueName: \"kubernetes.io/projected/ac873ef3-0874-4dab-91b9-675b74f13ad3-kube-api-access-trx47\") pod \"crc-debug-7gxzd\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.886495 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trx47\" (UniqueName: \"kubernetes.io/projected/ac873ef3-0874-4dab-91b9-675b74f13ad3-kube-api-access-trx47\") pod \"crc-debug-7gxzd\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.886639 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac873ef3-0874-4dab-91b9-675b74f13ad3-host\") pod \"crc-debug-7gxzd\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.886830 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ac873ef3-0874-4dab-91b9-675b74f13ad3-host\") pod \"crc-debug-7gxzd\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:23 crc kubenswrapper[4768]: I0217 14:31:23.906399 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trx47\" (UniqueName: \"kubernetes.io/projected/ac873ef3-0874-4dab-91b9-675b74f13ad3-kube-api-access-trx47\") pod \"crc-debug-7gxzd\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:24 crc kubenswrapper[4768]: I0217 14:31:24.043851 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:31:24 crc kubenswrapper[4768]: W0217 14:31:24.075506 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac873ef3_0874_4dab_91b9_675b74f13ad3.slice/crio-eab8f0fd3ff1309f2a0385cc1a83b1fe0e40d6fca72c6c5148ffaef412c25a9c WatchSource:0}: Error finding container eab8f0fd3ff1309f2a0385cc1a83b1fe0e40d6fca72c6c5148ffaef412c25a9c: Status 404 returned error can't find the container with id eab8f0fd3ff1309f2a0385cc1a83b1fe0e40d6fca72c6c5148ffaef412c25a9c Feb 17 14:31:25 crc kubenswrapper[4768]: I0217 14:31:25.059701 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" event={"ID":"ac873ef3-0874-4dab-91b9-675b74f13ad3","Type":"ContainerStarted","Data":"eab8f0fd3ff1309f2a0385cc1a83b1fe0e40d6fca72c6c5148ffaef412c25a9c"} Feb 17 14:31:28 crc kubenswrapper[4768]: I0217 14:31:28.060274 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 
14:31:28 crc kubenswrapper[4768]: I0217 14:31:28.060600 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:31:36 crc kubenswrapper[4768]: I0217 14:31:36.195872 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" event={"ID":"ac873ef3-0874-4dab-91b9-675b74f13ad3","Type":"ContainerStarted","Data":"0dad2640359b822dc7f772626cf4a0f8faa958d737327fac924fbf56ec5c8da6"} Feb 17 14:31:36 crc kubenswrapper[4768]: I0217 14:31:36.230011 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" podStartSLOduration=1.5485552930000002 podStartE2EDuration="13.229995479s" podCreationTimestamp="2026-02-17 14:31:23 +0000 UTC" firstStartedPulling="2026-02-17 14:31:24.077994416 +0000 UTC m=+3303.357380858" lastFinishedPulling="2026-02-17 14:31:35.759434612 +0000 UTC m=+3315.038821044" observedRunningTime="2026-02-17 14:31:36.220587862 +0000 UTC m=+3315.499974304" watchObservedRunningTime="2026-02-17 14:31:36.229995479 +0000 UTC m=+3315.509381921" Feb 17 14:31:58 crc kubenswrapper[4768]: I0217 14:31:58.059860 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:31:58 crc kubenswrapper[4768]: I0217 14:31:58.060350 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:32:16 crc kubenswrapper[4768]: I0217 14:32:16.548958 4768 generic.go:334] "Generic (PLEG): container finished" podID="ac873ef3-0874-4dab-91b9-675b74f13ad3" containerID="0dad2640359b822dc7f772626cf4a0f8faa958d737327fac924fbf56ec5c8da6" exitCode=0 Feb 17 14:32:16 crc kubenswrapper[4768]: I0217 14:32:16.549431 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" event={"ID":"ac873ef3-0874-4dab-91b9-675b74f13ad3","Type":"ContainerDied","Data":"0dad2640359b822dc7f772626cf4a0f8faa958d737327fac924fbf56ec5c8da6"} Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.676081 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.715499 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmvln/crc-debug-7gxzd"] Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.724050 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmvln/crc-debug-7gxzd"] Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.877750 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trx47\" (UniqueName: \"kubernetes.io/projected/ac873ef3-0874-4dab-91b9-675b74f13ad3-kube-api-access-trx47\") pod \"ac873ef3-0874-4dab-91b9-675b74f13ad3\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.877895 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac873ef3-0874-4dab-91b9-675b74f13ad3-host\") pod \"ac873ef3-0874-4dab-91b9-675b74f13ad3\" (UID: \"ac873ef3-0874-4dab-91b9-675b74f13ad3\") " Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.878018 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac873ef3-0874-4dab-91b9-675b74f13ad3-host" (OuterVolumeSpecName: "host") pod "ac873ef3-0874-4dab-91b9-675b74f13ad3" (UID: "ac873ef3-0874-4dab-91b9-675b74f13ad3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.878361 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac873ef3-0874-4dab-91b9-675b74f13ad3-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.884374 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac873ef3-0874-4dab-91b9-675b74f13ad3-kube-api-access-trx47" (OuterVolumeSpecName: "kube-api-access-trx47") pod "ac873ef3-0874-4dab-91b9-675b74f13ad3" (UID: "ac873ef3-0874-4dab-91b9-675b74f13ad3"). InnerVolumeSpecName "kube-api-access-trx47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:17 crc kubenswrapper[4768]: I0217 14:32:17.979998 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trx47\" (UniqueName: \"kubernetes.io/projected/ac873ef3-0874-4dab-91b9-675b74f13ad3-kube-api-access-trx47\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.568411 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab8f0fd3ff1309f2a0385cc1a83b1fe0e40d6fca72c6c5148ffaef412c25a9c" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.568451 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-7gxzd" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.886488 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kmvln/crc-debug-9k4zn"] Feb 17 14:32:18 crc kubenswrapper[4768]: E0217 14:32:18.886880 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac873ef3-0874-4dab-91b9-675b74f13ad3" containerName="container-00" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.886898 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac873ef3-0874-4dab-91b9-675b74f13ad3" containerName="container-00" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.887065 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac873ef3-0874-4dab-91b9-675b74f13ad3" containerName="container-00" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.887703 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.898975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bz2l\" (UniqueName: \"kubernetes.io/projected/170fc52a-0507-4a0e-a822-8bda1a0f7b91-kube-api-access-2bz2l\") pod \"crc-debug-9k4zn\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:18 crc kubenswrapper[4768]: I0217 14:32:18.899036 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170fc52a-0507-4a0e-a822-8bda1a0f7b91-host\") pod \"crc-debug-9k4zn\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.000620 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bz2l\" (UniqueName: 
\"kubernetes.io/projected/170fc52a-0507-4a0e-a822-8bda1a0f7b91-kube-api-access-2bz2l\") pod \"crc-debug-9k4zn\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.000692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170fc52a-0507-4a0e-a822-8bda1a0f7b91-host\") pod \"crc-debug-9k4zn\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.000878 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170fc52a-0507-4a0e-a822-8bda1a0f7b91-host\") pod \"crc-debug-9k4zn\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.022687 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bz2l\" (UniqueName: \"kubernetes.io/projected/170fc52a-0507-4a0e-a822-8bda1a0f7b91-kube-api-access-2bz2l\") pod \"crc-debug-9k4zn\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.203844 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.549031 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac873ef3-0874-4dab-91b9-675b74f13ad3" path="/var/lib/kubelet/pods/ac873ef3-0874-4dab-91b9-675b74f13ad3/volumes" Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.581221 4768 generic.go:334] "Generic (PLEG): container finished" podID="170fc52a-0507-4a0e-a822-8bda1a0f7b91" containerID="a13c64dcc7eaf05d744701f8f54cfdee01eda388b0d9843d3e56b6a966f677eb" exitCode=0 Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.581455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/crc-debug-9k4zn" event={"ID":"170fc52a-0507-4a0e-a822-8bda1a0f7b91","Type":"ContainerDied","Data":"a13c64dcc7eaf05d744701f8f54cfdee01eda388b0d9843d3e56b6a966f677eb"} Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.581603 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/crc-debug-9k4zn" event={"ID":"170fc52a-0507-4a0e-a822-8bda1a0f7b91","Type":"ContainerStarted","Data":"32e8aac2aadd6fbd31e841a858201523f74833e852cbe4c1de21839bd9b75afb"} Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.983367 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmvln/crc-debug-9k4zn"] Feb 17 14:32:19 crc kubenswrapper[4768]: I0217 14:32:19.995323 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmvln/crc-debug-9k4zn"] Feb 17 14:32:20 crc kubenswrapper[4768]: I0217 14:32:20.709837 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:20 crc kubenswrapper[4768]: I0217 14:32:20.835798 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170fc52a-0507-4a0e-a822-8bda1a0f7b91-host\") pod \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " Feb 17 14:32:20 crc kubenswrapper[4768]: I0217 14:32:20.836068 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bz2l\" (UniqueName: \"kubernetes.io/projected/170fc52a-0507-4a0e-a822-8bda1a0f7b91-kube-api-access-2bz2l\") pod \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\" (UID: \"170fc52a-0507-4a0e-a822-8bda1a0f7b91\") " Feb 17 14:32:20 crc kubenswrapper[4768]: I0217 14:32:20.835934 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/170fc52a-0507-4a0e-a822-8bda1a0f7b91-host" (OuterVolumeSpecName: "host") pod "170fc52a-0507-4a0e-a822-8bda1a0f7b91" (UID: "170fc52a-0507-4a0e-a822-8bda1a0f7b91"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:32:20 crc kubenswrapper[4768]: I0217 14:32:20.836525 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170fc52a-0507-4a0e-a822-8bda1a0f7b91-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:20 crc kubenswrapper[4768]: I0217 14:32:20.842278 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170fc52a-0507-4a0e-a822-8bda1a0f7b91-kube-api-access-2bz2l" (OuterVolumeSpecName: "kube-api-access-2bz2l") pod "170fc52a-0507-4a0e-a822-8bda1a0f7b91" (UID: "170fc52a-0507-4a0e-a822-8bda1a0f7b91"). InnerVolumeSpecName "kube-api-access-2bz2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:20 crc kubenswrapper[4768]: I0217 14:32:20.939214 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bz2l\" (UniqueName: \"kubernetes.io/projected/170fc52a-0507-4a0e-a822-8bda1a0f7b91-kube-api-access-2bz2l\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.162635 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kmvln/crc-debug-25npl"] Feb 17 14:32:21 crc kubenswrapper[4768]: E0217 14:32:21.162997 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170fc52a-0507-4a0e-a822-8bda1a0f7b91" containerName="container-00" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.163009 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="170fc52a-0507-4a0e-a822-8bda1a0f7b91" containerName="container-00" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.163223 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="170fc52a-0507-4a0e-a822-8bda1a0f7b91" containerName="container-00" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.163755 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.243237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-host\") pod \"crc-debug-25npl\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.243386 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l49x\" (UniqueName: \"kubernetes.io/projected/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-kube-api-access-8l49x\") pod \"crc-debug-25npl\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.345400 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-host\") pod \"crc-debug-25npl\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.345765 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l49x\" (UniqueName: \"kubernetes.io/projected/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-kube-api-access-8l49x\") pod \"crc-debug-25npl\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.345522 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-host\") pod \"crc-debug-25npl\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc 
kubenswrapper[4768]: I0217 14:32:21.360865 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l49x\" (UniqueName: \"kubernetes.io/projected/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-kube-api-access-8l49x\") pod \"crc-debug-25npl\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.482057 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:21 crc kubenswrapper[4768]: W0217 14:32:21.511625 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46892ab0_75cd_4eec_9c97_d1dc13e00e5a.slice/crio-a4efe46d6c02fbe2a9d2b2b48669882bbbcbd5df5fec4e7c84d4c03f8f388f24 WatchSource:0}: Error finding container a4efe46d6c02fbe2a9d2b2b48669882bbbcbd5df5fec4e7c84d4c03f8f388f24: Status 404 returned error can't find the container with id a4efe46d6c02fbe2a9d2b2b48669882bbbcbd5df5fec4e7c84d4c03f8f388f24 Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.550602 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170fc52a-0507-4a0e-a822-8bda1a0f7b91" path="/var/lib/kubelet/pods/170fc52a-0507-4a0e-a822-8bda1a0f7b91/volumes" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.603336 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/crc-debug-25npl" event={"ID":"46892ab0-75cd-4eec-9c97-d1dc13e00e5a","Type":"ContainerStarted","Data":"a4efe46d6c02fbe2a9d2b2b48669882bbbcbd5df5fec4e7c84d4c03f8f388f24"} Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.604973 4768 scope.go:117] "RemoveContainer" containerID="a13c64dcc7eaf05d744701f8f54cfdee01eda388b0d9843d3e56b6a966f677eb" Feb 17 14:32:21 crc kubenswrapper[4768]: I0217 14:32:21.605072 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-9k4zn" Feb 17 14:32:22 crc kubenswrapper[4768]: I0217 14:32:22.620011 4768 generic.go:334] "Generic (PLEG): container finished" podID="46892ab0-75cd-4eec-9c97-d1dc13e00e5a" containerID="ff53045bd3d956a58a119a209aef6811e589cd3a4c6bd874c43505c0e5712458" exitCode=0 Feb 17 14:32:22 crc kubenswrapper[4768]: I0217 14:32:22.620624 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/crc-debug-25npl" event={"ID":"46892ab0-75cd-4eec-9c97-d1dc13e00e5a","Type":"ContainerDied","Data":"ff53045bd3d956a58a119a209aef6811e589cd3a4c6bd874c43505c0e5712458"} Feb 17 14:32:22 crc kubenswrapper[4768]: I0217 14:32:22.676209 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmvln/crc-debug-25npl"] Feb 17 14:32:22 crc kubenswrapper[4768]: I0217 14:32:22.690052 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmvln/crc-debug-25npl"] Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.470410 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9l98f"] Feb 17 14:32:23 crc kubenswrapper[4768]: E0217 14:32:23.471879 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46892ab0-75cd-4eec-9c97-d1dc13e00e5a" containerName="container-00" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.471902 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="46892ab0-75cd-4eec-9c97-d1dc13e00e5a" containerName="container-00" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.472235 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="46892ab0-75cd-4eec-9c97-d1dc13e00e5a" containerName="container-00" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.475838 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.483668 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9l98f"] Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.623272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e521d5-38fc-41dc-9d90-cd52ebc76308-utilities\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.623359 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e521d5-38fc-41dc-9d90-cd52ebc76308-catalog-content\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.623380 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tds2w\" (UniqueName: \"kubernetes.io/projected/c2e521d5-38fc-41dc-9d90-cd52ebc76308-kube-api-access-tds2w\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.725304 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e521d5-38fc-41dc-9d90-cd52ebc76308-catalog-content\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.725555 4768 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-tds2w\" (UniqueName: \"kubernetes.io/projected/c2e521d5-38fc-41dc-9d90-cd52ebc76308-kube-api-access-tds2w\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.725681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e521d5-38fc-41dc-9d90-cd52ebc76308-utilities\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.725686 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e521d5-38fc-41dc-9d90-cd52ebc76308-catalog-content\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.725931 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e521d5-38fc-41dc-9d90-cd52ebc76308-utilities\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.727050 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.744639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tds2w\" (UniqueName: \"kubernetes.io/projected/c2e521d5-38fc-41dc-9d90-cd52ebc76308-kube-api-access-tds2w\") pod \"redhat-operators-9l98f\" (UID: \"c2e521d5-38fc-41dc-9d90-cd52ebc76308\") " pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.798557 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.827372 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l49x\" (UniqueName: \"kubernetes.io/projected/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-kube-api-access-8l49x\") pod \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.827421 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-host\") pod \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\" (UID: \"46892ab0-75cd-4eec-9c97-d1dc13e00e5a\") " Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.827794 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-host" (OuterVolumeSpecName: "host") pod "46892ab0-75cd-4eec-9c97-d1dc13e00e5a" (UID: "46892ab0-75cd-4eec-9c97-d1dc13e00e5a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.832271 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-kube-api-access-8l49x" (OuterVolumeSpecName: "kube-api-access-8l49x") pod "46892ab0-75cd-4eec-9c97-d1dc13e00e5a" (UID: "46892ab0-75cd-4eec-9c97-d1dc13e00e5a"). InnerVolumeSpecName "kube-api-access-8l49x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.929347 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l49x\" (UniqueName: \"kubernetes.io/projected/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-kube-api-access-8l49x\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:23 crc kubenswrapper[4768]: I0217 14:32:23.929604 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/46892ab0-75cd-4eec-9c97-d1dc13e00e5a-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:32:24 crc kubenswrapper[4768]: I0217 14:32:24.280974 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9l98f"] Feb 17 14:32:24 crc kubenswrapper[4768]: W0217 14:32:24.282152 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2e521d5_38fc_41dc_9d90_cd52ebc76308.slice/crio-d71fdecee4ee61a8eef9f336cabb2b0087292c6e48b38047ff6c8215bfabfa35 WatchSource:0}: Error finding container d71fdecee4ee61a8eef9f336cabb2b0087292c6e48b38047ff6c8215bfabfa35: Status 404 returned error can't find the container with id d71fdecee4ee61a8eef9f336cabb2b0087292c6e48b38047ff6c8215bfabfa35 Feb 17 14:32:24 crc kubenswrapper[4768]: I0217 14:32:24.640438 4768 scope.go:117] "RemoveContainer" containerID="ff53045bd3d956a58a119a209aef6811e589cd3a4c6bd874c43505c0e5712458" Feb 17 14:32:24 crc kubenswrapper[4768]: I0217 14:32:24.640873 
4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/crc-debug-25npl" Feb 17 14:32:24 crc kubenswrapper[4768]: I0217 14:32:24.653899 4768 generic.go:334] "Generic (PLEG): container finished" podID="c2e521d5-38fc-41dc-9d90-cd52ebc76308" containerID="25098427e93d00ada5b6827e6f4ed16407f2a3c250d654f666910221f4aa8c90" exitCode=0 Feb 17 14:32:24 crc kubenswrapper[4768]: I0217 14:32:24.653953 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l98f" event={"ID":"c2e521d5-38fc-41dc-9d90-cd52ebc76308","Type":"ContainerDied","Data":"25098427e93d00ada5b6827e6f4ed16407f2a3c250d654f666910221f4aa8c90"} Feb 17 14:32:24 crc kubenswrapper[4768]: I0217 14:32:24.653983 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l98f" event={"ID":"c2e521d5-38fc-41dc-9d90-cd52ebc76308","Type":"ContainerStarted","Data":"d71fdecee4ee61a8eef9f336cabb2b0087292c6e48b38047ff6c8215bfabfa35"} Feb 17 14:32:25 crc kubenswrapper[4768]: I0217 14:32:25.544621 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46892ab0-75cd-4eec-9c97-d1dc13e00e5a" path="/var/lib/kubelet/pods/46892ab0-75cd-4eec-9c97-d1dc13e00e5a/volumes" Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.060243 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.060618 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.060681 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.061507 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec2ca14acc6c65d44de9f0616d9b906984ff6fc79d13b91607724b751e6b996b"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.061551 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://ec2ca14acc6c65d44de9f0616d9b906984ff6fc79d13b91607724b751e6b996b" gracePeriod=600 Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.701928 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="ec2ca14acc6c65d44de9f0616d9b906984ff6fc79d13b91607724b751e6b996b" exitCode=0 Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.702732 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"ec2ca14acc6c65d44de9f0616d9b906984ff6fc79d13b91607724b751e6b996b"} Feb 17 14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.702853 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"} Feb 17 
14:32:28 crc kubenswrapper[4768]: I0217 14:32:28.702946 4768 scope.go:117] "RemoveContainer" containerID="dc7bce07b3dfdfe43d79ada51697272e656711537a015463e8a8ba6fdf153965" Feb 17 14:32:36 crc kubenswrapper[4768]: I0217 14:32:36.772871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l98f" event={"ID":"c2e521d5-38fc-41dc-9d90-cd52ebc76308","Type":"ContainerStarted","Data":"085de37adc8c952827cbf7f22cba88dcb30ade2cceebed9cafa054c82e3a79d0"} Feb 17 14:32:39 crc kubenswrapper[4768]: I0217 14:32:39.526945 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f5954c4f6-p5w62_3d5e5fc2-44f3-45d7-848c-ed40f1ea1401/barbican-api/0.log" Feb 17 14:32:39 crc kubenswrapper[4768]: I0217 14:32:39.795362 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f5954c4f6-p5w62_3d5e5fc2-44f3-45d7-848c-ed40f1ea1401/barbican-api-log/0.log" Feb 17 14:32:39 crc kubenswrapper[4768]: I0217 14:32:39.802827 4768 generic.go:334] "Generic (PLEG): container finished" podID="c2e521d5-38fc-41dc-9d90-cd52ebc76308" containerID="085de37adc8c952827cbf7f22cba88dcb30ade2cceebed9cafa054c82e3a79d0" exitCode=0 Feb 17 14:32:39 crc kubenswrapper[4768]: I0217 14:32:39.802872 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l98f" event={"ID":"c2e521d5-38fc-41dc-9d90-cd52ebc76308","Type":"ContainerDied","Data":"085de37adc8c952827cbf7f22cba88dcb30ade2cceebed9cafa054c82e3a79d0"} Feb 17 14:32:39 crc kubenswrapper[4768]: I0217 14:32:39.937273 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-f5cff5694-mvlv5_0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7/barbican-keystone-listener/0.log" Feb 17 14:32:39 crc kubenswrapper[4768]: I0217 14:32:39.976982 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-f5cff5694-mvlv5_0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7/barbican-keystone-listener-log/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.100196 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68cd444875-wgnnm_486b688d-e9dd-4c6b-ae8d-c2e536172e53/barbican-worker/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.148486 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68cd444875-wgnnm_486b688d-e9dd-4c6b-ae8d-c2e536172e53/barbican-worker-log/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.277312 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr_0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.361165 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/ceilometer-central-agent/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.411291 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/ceilometer-notification-agent/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.483827 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/proxy-httpd/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.582654 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/sg-core/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.692955 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_797f85b4-f933-4b20-b7a5-e2f3b17a5b56/cinder-api/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 
14:32:40.738753 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_797f85b4-f933-4b20-b7a5-e2f3b17a5b56/cinder-api-log/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.901707 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd2b9dae-27bf-467c-96e0-194f0e25b814/cinder-scheduler/0.log" Feb 17 14:32:40 crc kubenswrapper[4768]: I0217 14:32:40.929853 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd2b9dae-27bf-467c-96e0-194f0e25b814/probe/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.142633 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc_9749c980-c481-4841-b24e-bd1dc6625b59/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.224893 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd_e5fb7529-06bd-4dbe-aeb8-5753feec5be2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.351340 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rbrnb_22064d12-d9c4-45c2-927e-77ce03c906bb/init/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.636598 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rbrnb_22064d12-d9c4-45c2-927e-77ce03c906bb/dnsmasq-dns/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.650743 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rbrnb_22064d12-d9c4-45c2-927e-77ce03c906bb/init/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.672181 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-blr9d_a8163799-ddb2-4876-830f-19da3abc4578/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.821297 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l98f" event={"ID":"c2e521d5-38fc-41dc-9d90-cd52ebc76308","Type":"ContainerStarted","Data":"f5bb7daf6f923a201828c8c1138ea2c3e841776af30c7aae85fcb6cfc03e1799"} Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.837769 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5d72c76c-a1d7-4256-ada6-3216f5d7c71a/glance-httpd/0.log" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.844694 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9l98f" podStartSLOduration=2.9453816379999997 podStartE2EDuration="18.844671136s" podCreationTimestamp="2026-02-17 14:32:23 +0000 UTC" firstStartedPulling="2026-02-17 14:32:24.656335778 +0000 UTC m=+3363.935722220" lastFinishedPulling="2026-02-17 14:32:40.555625276 +0000 UTC m=+3379.835011718" observedRunningTime="2026-02-17 14:32:41.840375029 +0000 UTC m=+3381.119761461" watchObservedRunningTime="2026-02-17 14:32:41.844671136 +0000 UTC m=+3381.124057588" Feb 17 14:32:41 crc kubenswrapper[4768]: I0217 14:32:41.909323 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5d72c76c-a1d7-4256-ada6-3216f5d7c71a/glance-log/0.log" Feb 17 14:32:42 crc kubenswrapper[4768]: I0217 14:32:42.123274 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_46ee7793-1245-4648-aa12-ae11b1db13ca/glance-log/0.log" Feb 17 14:32:42 crc kubenswrapper[4768]: I0217 14:32:42.145164 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_46ee7793-1245-4648-aa12-ae11b1db13ca/glance-httpd/0.log" Feb 17 14:32:42 crc kubenswrapper[4768]: I0217 14:32:42.267576 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6584d79658-wtxrc_331a37d3-96b1-4065-9941-25acc64cc6c1/horizon/0.log" Feb 17 14:32:42 crc kubenswrapper[4768]: I0217 14:32:42.461465 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5_f84244fa-e156-4bf4-bc42-22336b96a556/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:42 crc kubenswrapper[4768]: I0217 14:32:42.610526 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6584d79658-wtxrc_331a37d3-96b1-4065-9941-25acc64cc6c1/horizon-log/0.log" Feb 17 14:32:42 crc kubenswrapper[4768]: I0217 14:32:42.768957 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rdgml_c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.015686 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-77c78fc8c5-fgk9h_f8201b1d-afab-4fc2-bde1-bad212359f0a/keystone-api/0.log" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.029927 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522281-zd786_b890b491-00b8-4c5c-9eb9-95f403148371/keystone-cron/0.log" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.175486 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dceedb47-5ab1-46d0-9e16-a8d267d73ff8/kube-state-metrics/0.log" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.261828 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp_30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.668086 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85664fc4b9-7bclg_f0bb15c9-ac11-47c0-893f-5f0f36554f2b/neutron-api/0.log" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.756611 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85664fc4b9-7bclg_f0bb15c9-ac11-47c0-893f-5f0f36554f2b/neutron-httpd/0.log" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.799088 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.799177 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:32:43 crc kubenswrapper[4768]: I0217 14:32:43.946113 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2_4fa13453-9d50-4130-ad98-37c224390a7e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:44 crc kubenswrapper[4768]: I0217 14:32:44.397822 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_96c0f340-0c30-46ee-8c25-b4c96718d2b0/nova-cell0-conductor-conductor/0.log" Feb 17 14:32:44 crc kubenswrapper[4768]: I0217 14:32:44.504762 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe/nova-api-log/0.log" Feb 17 14:32:44 crc kubenswrapper[4768]: I0217 14:32:44.629461 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe/nova-api-api/0.log" Feb 17 14:32:44 crc kubenswrapper[4768]: I0217 14:32:44.726578 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ee6256eb-4e26-4e93-ae49-8c6be5aace6c/nova-cell1-conductor-conductor/0.log" Feb 17 14:32:44 crc kubenswrapper[4768]: I0217 14:32:44.856308 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_15ac025d-e62d-4a1d-8f2c-86d36c7261f2/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 14:32:44 crc kubenswrapper[4768]: I0217 14:32:44.909740 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9l98f" podUID="c2e521d5-38fc-41dc-9d90-cd52ebc76308" containerName="registry-server" probeResult="failure" output=< Feb 17 14:32:44 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 14:32:44 crc kubenswrapper[4768]: > Feb 17 14:32:45 crc kubenswrapper[4768]: I0217 14:32:45.029979 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-4tgnr_7df23c60-d5f8-47e9-a852-ba39850823cb/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:45 crc kubenswrapper[4768]: I0217 14:32:45.235613 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4a274ef1-85cc-4456-960d-079fe7c8ea6d/nova-metadata-log/0.log" Feb 17 14:32:45 crc kubenswrapper[4768]: I0217 14:32:45.465452 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1b3ad6f8-7496-467a-bdeb-7cf29963af21/nova-scheduler-scheduler/0.log" Feb 17 14:32:45 crc kubenswrapper[4768]: I0217 14:32:45.490170 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0368ca4-d5b7-4604-b15a-a7cb4fcf5652/mysql-bootstrap/0.log" Feb 17 14:32:45 crc kubenswrapper[4768]: I0217 14:32:45.788532 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0368ca4-d5b7-4604-b15a-a7cb4fcf5652/galera/0.log" Feb 17 14:32:45 crc kubenswrapper[4768]: I0217 
14:32:45.819563 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0368ca4-d5b7-4604-b15a-a7cb4fcf5652/mysql-bootstrap/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.108410 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5ba1ccc6-d556-4867-8e12-a5747dba1ffa/mysql-bootstrap/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.293757 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4a274ef1-85cc-4456-960d-079fe7c8ea6d/nova-metadata-metadata/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.356002 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5ba1ccc6-d556-4867-8e12-a5747dba1ffa/mysql-bootstrap/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.382859 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5ba1ccc6-d556-4867-8e12-a5747dba1ffa/galera/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.507162 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b765d360-2c6c-4740-b75e-bd16636a41e0/openstackclient/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.585835 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gnb4g_39dede0b-4ddc-46ea-81c1-a8e7e576aa78/ovn-controller/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.731599 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rchd4_d969a380-827a-46eb-8f6e-9f28ae50312a/openstack-network-exporter/0.log" Feb 17 14:32:46 crc kubenswrapper[4768]: I0217 14:32:46.815558 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovsdb-server-init/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.069771 4768 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovsdb-server-init/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.089719 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovsdb-server/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.091493 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovs-vswitchd/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.329195 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9nqqc_20f7a484-7e3c-4df5-84b0-98bd83632fb1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.347265 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_41ee36c1-d509-4c0c-960a-279955237a10/openstack-network-exporter/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.350025 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_41ee36c1-d509-4c0c-960a-279955237a10/ovn-northd/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.518287 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1296e827-af28-4d2e-a80d-33add3697b6e/openstack-network-exporter/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.578488 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1296e827-af28-4d2e-a80d-33add3697b6e/ovsdbserver-nb/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.700134 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e5947dc-7f07-4498-8be8-2b0c184c5853/openstack-network-exporter/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 
14:32:47.810338 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e5947dc-7f07-4498-8be8-2b0c184c5853/ovsdbserver-sb/0.log" Feb 17 14:32:47 crc kubenswrapper[4768]: I0217 14:32:47.879373 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f459487b8-6m6q4_41aee306-e130-4ed4-ba8e-381531d03dc3/placement-api/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.042984 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_edccbc8c-a38a-4c5d-b31a-a3b55f182ffa/setup-container/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.086764 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f459487b8-6m6q4_41aee306-e130-4ed4-ba8e-381531d03dc3/placement-log/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.260281 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_edccbc8c-a38a-4c5d-b31a-a3b55f182ffa/rabbitmq/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.273282 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_edccbc8c-a38a-4c5d-b31a-a3b55f182ffa/setup-container/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.289208 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78830acd-378f-4199-8615-9884cdca4154/setup-container/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.575760 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk_42b3a8d2-3952-474e-9821-8472466012cb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.582365 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78830acd-378f-4199-8615-9884cdca4154/rabbitmq/0.log" Feb 17 14:32:48 crc 
kubenswrapper[4768]: I0217 14:32:48.599357 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78830acd-378f-4199-8615-9884cdca4154/setup-container/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.792844 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-wsvlq_239d0b98-514d-42e7-8a8c-ac152e3410ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:48 crc kubenswrapper[4768]: I0217 14:32:48.972231 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s_c99d698a-1af3-46d2-97c5-0c33573adaca/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.035535 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-td9b9_c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.202265 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4nwsr_62a034b9-286c-4b4b-aea8-8ca20fe7610f/ssh-known-hosts-edpm-deployment/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.384257 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6999b7cf5c-4f5kt_4ac4ebb9-cc51-4934-b4c7-590830f2a04a/proxy-server/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.411959 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6999b7cf5c-4f5kt_4ac4ebb9-cc51-4934-b4c7-590830f2a04a/proxy-httpd/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.469339 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wcvmp_3fc3a6f3-433a-44de-bf42-c29e730f2da3/swift-ring-rebalance/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 
14:32:49.651248 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-auditor/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.693908 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-reaper/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.759254 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-replicator/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.869038 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-server/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.942692 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-auditor/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.981060 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-server/0.log" Feb 17 14:32:49 crc kubenswrapper[4768]: I0217 14:32:49.994733 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-replicator/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.102393 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-updater/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.143639 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-auditor/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.211898 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-replicator/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.254915 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-expirer/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.363198 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-updater/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.379485 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-server/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.457771 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/rsync/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.484730 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/swift-recon-cron/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.705329 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-txckp_037854ba-d107-4be1-8a90-914e9180957d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.757364 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_780f2ee6-f4d9-455c-97e6-7e6451706324/tempest-tests-tempest-tests-runner/0.log" Feb 17 14:32:50 crc kubenswrapper[4768]: I0217 14:32:50.935219 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0ecebf14-59d4-448f-8f09-f3b51ebd695e/test-operator-logs-container/0.log" Feb 17 14:32:51 crc kubenswrapper[4768]: I0217 
14:32:51.061148 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn_72dee802-02e1-4ce6-adf4-a32b56d357b4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:32:54 crc kubenswrapper[4768]: I0217 14:32:54.847958 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9l98f" podUID="c2e521d5-38fc-41dc-9d90-cd52ebc76308" containerName="registry-server" probeResult="failure" output=< Feb 17 14:32:54 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 14:32:54 crc kubenswrapper[4768]: > Feb 17 14:32:59 crc kubenswrapper[4768]: I0217 14:32:59.354700 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d87a0ca2-9789-4e14-a18b-2ed216ea5d15/memcached/0.log" Feb 17 14:33:04 crc kubenswrapper[4768]: I0217 14:33:04.842785 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9l98f" podUID="c2e521d5-38fc-41dc-9d90-cd52ebc76308" containerName="registry-server" probeResult="failure" output=< Feb 17 14:33:04 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 14:33:04 crc kubenswrapper[4768]: > Feb 17 14:33:13 crc kubenswrapper[4768]: I0217 14:33:13.848768 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:33:13 crc kubenswrapper[4768]: I0217 14:33:13.904477 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9l98f" Feb 17 14:33:13 crc kubenswrapper[4768]: I0217 14:33:13.987074 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9l98f"] Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.097647 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fppgg"] 
Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.098373 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fppgg" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="registry-server" containerID="cri-o://41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1" gracePeriod=2 Feb 17 14:33:14 crc kubenswrapper[4768]: E0217 14:33:14.242459 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a639663_ee39_4aa9_874c_c14cff7d6223.slice/crio-conmon-41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a639663_ee39_4aa9_874c_c14cff7d6223.slice/crio-41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1.scope\": RecentStats: unable to find data in memory cache]" Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.639893 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.810788 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-utilities\") pod \"3a639663-ee39-4aa9-874c-c14cff7d6223\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.811295 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-catalog-content\") pod \"3a639663-ee39-4aa9-874c-c14cff7d6223\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.811344 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzr5m\" (UniqueName: \"kubernetes.io/projected/3a639663-ee39-4aa9-874c-c14cff7d6223-kube-api-access-gzr5m\") pod \"3a639663-ee39-4aa9-874c-c14cff7d6223\" (UID: \"3a639663-ee39-4aa9-874c-c14cff7d6223\") " Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.813081 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-utilities" (OuterVolumeSpecName: "utilities") pod "3a639663-ee39-4aa9-874c-c14cff7d6223" (UID: "3a639663-ee39-4aa9-874c-c14cff7d6223"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.827935 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a639663-ee39-4aa9-874c-c14cff7d6223-kube-api-access-gzr5m" (OuterVolumeSpecName: "kube-api-access-gzr5m") pod "3a639663-ee39-4aa9-874c-c14cff7d6223" (UID: "3a639663-ee39-4aa9-874c-c14cff7d6223"). InnerVolumeSpecName "kube-api-access-gzr5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.913501 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzr5m\" (UniqueName: \"kubernetes.io/projected/3a639663-ee39-4aa9-874c-c14cff7d6223-kube-api-access-gzr5m\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.913545 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:14 crc kubenswrapper[4768]: I0217 14:33:14.960431 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a639663-ee39-4aa9-874c-c14cff7d6223" (UID: "3a639663-ee39-4aa9-874c-c14cff7d6223"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.015216 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a639663-ee39-4aa9-874c-c14cff7d6223-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.079966 4768 generic.go:334] "Generic (PLEG): container finished" podID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerID="41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1" exitCode=0 Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.080012 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fppgg" event={"ID":"3a639663-ee39-4aa9-874c-c14cff7d6223","Type":"ContainerDied","Data":"41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1"} Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.080042 4768 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fppgg" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.080077 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fppgg" event={"ID":"3a639663-ee39-4aa9-874c-c14cff7d6223","Type":"ContainerDied","Data":"5c6af4ee0cb08744ee0177fa377961f649347f5b4662e20710e1bbc63f5a7ec4"} Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.080127 4768 scope.go:117] "RemoveContainer" containerID="41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.118468 4768 scope.go:117] "RemoveContainer" containerID="db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.127152 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fppgg"] Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.134723 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fppgg"] Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.159159 4768 scope.go:117] "RemoveContainer" containerID="c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.229303 4768 scope.go:117] "RemoveContainer" containerID="41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1" Feb 17 14:33:15 crc kubenswrapper[4768]: E0217 14:33:15.233322 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1\": container with ID starting with 41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1 not found: ID does not exist" containerID="41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.233374 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1"} err="failed to get container status \"41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1\": rpc error: code = NotFound desc = could not find container \"41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1\": container with ID starting with 41fa83826ba69811f2c4579ba76776701df2ab36fb5d4716ad51ef996a957df1 not found: ID does not exist" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.233399 4768 scope.go:117] "RemoveContainer" containerID="db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57" Feb 17 14:33:15 crc kubenswrapper[4768]: E0217 14:33:15.238579 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57\": container with ID starting with db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57 not found: ID does not exist" containerID="db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.238638 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57"} err="failed to get container status \"db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57\": rpc error: code = NotFound desc = could not find container \"db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57\": container with ID starting with db8f7d005f08e0bf280dfef30b4ce02dc33e6e99e18e92df9e7998d6ed408e57 not found: ID does not exist" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.238664 4768 scope.go:117] "RemoveContainer" containerID="c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0" Feb 17 14:33:15 crc kubenswrapper[4768]: E0217 
14:33:15.238934 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0\": container with ID starting with c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0 not found: ID does not exist" containerID="c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.238966 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0"} err="failed to get container status \"c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0\": rpc error: code = NotFound desc = could not find container \"c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0\": container with ID starting with c4e693b707bd771e86ce359289be73beb60b2df88608378721d7360cf3efefa0 not found: ID does not exist" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.273071 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/util/0.log" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.521536 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/pull/0.log" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.534008 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/util/0.log" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.543215 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" 
path="/var/lib/kubelet/pods/3a639663-ee39-4aa9-874c-c14cff7d6223/volumes" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.571236 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/pull/0.log" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.731505 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/util/0.log" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.742974 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/pull/0.log" Feb 17 14:33:15 crc kubenswrapper[4768]: I0217 14:33:15.764982 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/extract/0.log" Feb 17 14:33:16 crc kubenswrapper[4768]: I0217 14:33:16.221835 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-hn2hg_663c818c-0255-4f9c-827e-ccb2b430c5e3/manager/0.log" Feb 17 14:33:16 crc kubenswrapper[4768]: I0217 14:33:16.587562 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-7wnck_f5689d3e-d755-485e-80a1-e808c460022d/manager/0.log" Feb 17 14:33:16 crc kubenswrapper[4768]: I0217 14:33:16.919125 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-bl6rp_8b8ebdec-5fc0-4f66-9a22-b833d3cd4283/manager/0.log" Feb 17 14:33:17 crc kubenswrapper[4768]: I0217 14:33:17.277476 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-hrb5z_aa6bb524-9950-4add-9b03-04f324c9a02d/manager/0.log" Feb 17 14:33:17 crc kubenswrapper[4768]: I0217 14:33:17.531508 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-hrkzn_633a0666-42b2-4422-9b47-fb69c1105655/manager/0.log" Feb 17 14:33:17 crc kubenswrapper[4768]: I0217 14:33:17.777681 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-pfr2g_2560f30e-2ede-4f2e-a3a1-e3e7e96b5792/manager/0.log" Feb 17 14:33:18 crc kubenswrapper[4768]: I0217 14:33:18.009485 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cpkx6_9e699840-e748-4e5d-8629-f0379a7cce08/manager/0.log" Feb 17 14:33:18 crc kubenswrapper[4768]: I0217 14:33:18.231490 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-7ntmb_4f912c2e-e494-46c0-9231-40c106b00c40/manager/0.log" Feb 17 14:33:18 crc kubenswrapper[4768]: I0217 14:33:18.284133 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-qfz4j_c040f799-8668-44a6-b694-0b253aaf7930/manager/0.log" Feb 17 14:33:18 crc kubenswrapper[4768]: I0217 14:33:18.579117 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-v5svk_93f56e48-e402-471a-b9c0-0fac088f7a7e/manager/0.log" Feb 17 14:33:18 crc kubenswrapper[4768]: I0217 14:33:18.709445 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-4wm78_2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df/manager/0.log" Feb 17 14:33:18 crc kubenswrapper[4768]: I0217 14:33:18.870428 4768 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-9krmz_e48b6c11-496b-4f36-9155-119bbfb506f8/manager/0.log" Feb 17 14:33:19 crc kubenswrapper[4768]: I0217 14:33:19.162447 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7_09c0d3ef-49e2-4dec-a95f-951be73d5740/manager/0.log" Feb 17 14:33:19 crc kubenswrapper[4768]: I0217 14:33:19.546938 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5b99dcf57b-tb622_3f9e32d0-4476-4d44-8266-d821ad79f322/operator/0.log" Feb 17 14:33:19 crc kubenswrapper[4768]: I0217 14:33:19.780752 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-46vqg_ee54ff6a-14d8-4701-beac-8f6eeafc5d84/registry-server/0.log" Feb 17 14:33:20 crc kubenswrapper[4768]: I0217 14:33:20.017254 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-dp9tq_96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a/manager/0.log" Feb 17 14:33:20 crc kubenswrapper[4768]: I0217 14:33:20.229483 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-pvjkl_9aee7c4a-404a-434e-8aa9-b671553532d2/manager/0.log" Feb 17 14:33:20 crc kubenswrapper[4768]: I0217 14:33:20.476281 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pqvzt_a5348349-c195-4af1-b367-a6cb0842305b/operator/0.log" Feb 17 14:33:20 crc kubenswrapper[4768]: I0217 14:33:20.485468 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-j4n52_2567e2d5-83bd-4345-b94b-36527465ce1b/manager/0.log" Feb 17 14:33:20 crc kubenswrapper[4768]: I0217 14:33:20.722384 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-kmgl4_a9e933fb-130b-4a7e-91c4-9ca5f2747e35/manager/0.log" Feb 17 14:33:20 crc kubenswrapper[4768]: I0217 14:33:20.854358 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-4c9sb_d8d8a911-905e-45e3-a4ed-35338f74806f/manager/0.log" Feb 17 14:33:21 crc kubenswrapper[4768]: I0217 14:33:21.034300 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-ln8v5_c8a74650-d867-4ab8-92a3-fcdc815247c4/manager/0.log" Feb 17 14:33:21 crc kubenswrapper[4768]: I0217 14:33:21.115916 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-xx7vm_71305e38-208f-43be-9bb9-32341555750c/manager/0.log" Feb 17 14:33:21 crc kubenswrapper[4768]: I0217 14:33:21.413182 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57785b79bf-sjndd_e7b1071b-c742-4578-8226-12a6cce613f1/manager/0.log" Feb 17 14:33:22 crc kubenswrapper[4768]: I0217 14:33:22.795066 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-99tll_ea7f039d-d594-4b9e-9dac-06e9f13bdba2/manager/0.log" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.878498 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmnc"] Feb 17 14:33:25 crc kubenswrapper[4768]: E0217 14:33:25.879283 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="extract-utilities" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.879296 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="extract-utilities" Feb 17 
14:33:25 crc kubenswrapper[4768]: E0217 14:33:25.879354 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="registry-server" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.879362 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="registry-server" Feb 17 14:33:25 crc kubenswrapper[4768]: E0217 14:33:25.879389 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="extract-content" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.879397 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="extract-content" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.879558 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a639663-ee39-4aa9-874c-c14cff7d6223" containerName="registry-server" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.880727 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.903267 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmnc"] Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.999455 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-catalog-content\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.999599 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/a587d82f-a2ac-4dbc-91be-66049fd5662a-kube-api-access-5mm4f\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:25 crc kubenswrapper[4768]: I0217 14:33:25.999702 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-utilities\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.101053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-utilities\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.101222 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-catalog-content\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.101276 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/a587d82f-a2ac-4dbc-91be-66049fd5662a-kube-api-access-5mm4f\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.101632 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-utilities\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.101938 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-catalog-content\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.119769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/a587d82f-a2ac-4dbc-91be-66049fd5662a-kube-api-access-5mm4f\") pod \"redhat-marketplace-pzmnc\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.206826 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:26 crc kubenswrapper[4768]: I0217 14:33:26.714235 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmnc"] Feb 17 14:33:27 crc kubenswrapper[4768]: I0217 14:33:27.232688 4768 generic.go:334] "Generic (PLEG): container finished" podID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerID="4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948" exitCode=0 Feb 17 14:33:27 crc kubenswrapper[4768]: I0217 14:33:27.232802 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmnc" event={"ID":"a587d82f-a2ac-4dbc-91be-66049fd5662a","Type":"ContainerDied","Data":"4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948"} Feb 17 14:33:27 crc kubenswrapper[4768]: I0217 14:33:27.233038 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmnc" event={"ID":"a587d82f-a2ac-4dbc-91be-66049fd5662a","Type":"ContainerStarted","Data":"6f38e5f831ba73aa36455d68852a4ffcf6ffadbb1b966c3425e90ce2da963219"} Feb 17 14:33:28 crc kubenswrapper[4768]: I0217 14:33:28.246306 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmnc" event={"ID":"a587d82f-a2ac-4dbc-91be-66049fd5662a","Type":"ContainerStarted","Data":"90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5"} Feb 17 14:33:29 crc kubenswrapper[4768]: I0217 14:33:29.256431 4768 generic.go:334] "Generic (PLEG): container finished" podID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerID="90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5" exitCode=0 Feb 17 14:33:29 crc kubenswrapper[4768]: I0217 14:33:29.256545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmnc" 
event={"ID":"a587d82f-a2ac-4dbc-91be-66049fd5662a","Type":"ContainerDied","Data":"90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5"} Feb 17 14:33:30 crc kubenswrapper[4768]: I0217 14:33:30.277783 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmnc" event={"ID":"a587d82f-a2ac-4dbc-91be-66049fd5662a","Type":"ContainerStarted","Data":"48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1"} Feb 17 14:33:36 crc kubenswrapper[4768]: I0217 14:33:36.207316 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:36 crc kubenswrapper[4768]: I0217 14:33:36.208182 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:36 crc kubenswrapper[4768]: I0217 14:33:36.254919 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:36 crc kubenswrapper[4768]: I0217 14:33:36.280865 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pzmnc" podStartSLOduration=8.641297698 podStartE2EDuration="11.280840341s" podCreationTimestamp="2026-02-17 14:33:25 +0000 UTC" firstStartedPulling="2026-02-17 14:33:27.234389662 +0000 UTC m=+3426.513776094" lastFinishedPulling="2026-02-17 14:33:29.873932295 +0000 UTC m=+3429.153318737" observedRunningTime="2026-02-17 14:33:30.314585041 +0000 UTC m=+3429.593971483" watchObservedRunningTime="2026-02-17 14:33:36.280840341 +0000 UTC m=+3435.560226803" Feb 17 14:33:36 crc kubenswrapper[4768]: I0217 14:33:36.368152 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:37 crc kubenswrapper[4768]: I0217 14:33:37.060937 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-pzmnc"] Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.343258 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pzmnc" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="registry-server" containerID="cri-o://48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1" gracePeriod=2 Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.854248 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.953267 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-utilities\") pod \"a587d82f-a2ac-4dbc-91be-66049fd5662a\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.953398 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-catalog-content\") pod \"a587d82f-a2ac-4dbc-91be-66049fd5662a\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.953513 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/a587d82f-a2ac-4dbc-91be-66049fd5662a-kube-api-access-5mm4f\") pod \"a587d82f-a2ac-4dbc-91be-66049fd5662a\" (UID: \"a587d82f-a2ac-4dbc-91be-66049fd5662a\") " Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.954037 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-utilities" (OuterVolumeSpecName: "utilities") pod "a587d82f-a2ac-4dbc-91be-66049fd5662a" (UID: 
"a587d82f-a2ac-4dbc-91be-66049fd5662a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.964965 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a587d82f-a2ac-4dbc-91be-66049fd5662a-kube-api-access-5mm4f" (OuterVolumeSpecName: "kube-api-access-5mm4f") pod "a587d82f-a2ac-4dbc-91be-66049fd5662a" (UID: "a587d82f-a2ac-4dbc-91be-66049fd5662a"). InnerVolumeSpecName "kube-api-access-5mm4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:33:38 crc kubenswrapper[4768]: I0217 14:33:38.975051 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a587d82f-a2ac-4dbc-91be-66049fd5662a" (UID: "a587d82f-a2ac-4dbc-91be-66049fd5662a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.055650 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/a587d82f-a2ac-4dbc-91be-66049fd5662a-kube-api-access-5mm4f\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.055694 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.055705 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a587d82f-a2ac-4dbc-91be-66049fd5662a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.355947 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerID="48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1" exitCode=0 Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.356016 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmnc" event={"ID":"a587d82f-a2ac-4dbc-91be-66049fd5662a","Type":"ContainerDied","Data":"48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1"} Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.356069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmnc" event={"ID":"a587d82f-a2ac-4dbc-91be-66049fd5662a","Type":"ContainerDied","Data":"6f38e5f831ba73aa36455d68852a4ffcf6ffadbb1b966c3425e90ce2da963219"} Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.356140 4768 scope.go:117] "RemoveContainer" containerID="48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.356372 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmnc" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.383162 4768 scope.go:117] "RemoveContainer" containerID="90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.413695 4768 scope.go:117] "RemoveContainer" containerID="4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.426283 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmnc"] Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.446415 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmnc"] Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.467894 4768 scope.go:117] "RemoveContainer" containerID="48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1" Feb 17 14:33:39 crc kubenswrapper[4768]: E0217 14:33:39.468380 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1\": container with ID starting with 48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1 not found: ID does not exist" containerID="48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.468424 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1"} err="failed to get container status \"48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1\": rpc error: code = NotFound desc = could not find container \"48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1\": container with ID starting with 48074a2b5d14cbaae61b90ce8ad5c9e5982b2b8e853a3569cc3076f1339fa5e1 not found: 
ID does not exist" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.468450 4768 scope.go:117] "RemoveContainer" containerID="90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5" Feb 17 14:33:39 crc kubenswrapper[4768]: E0217 14:33:39.468767 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5\": container with ID starting with 90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5 not found: ID does not exist" containerID="90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.468813 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5"} err="failed to get container status \"90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5\": rpc error: code = NotFound desc = could not find container \"90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5\": container with ID starting with 90cf1bf9f16cffcc6e86a1e2d5fe3b07b1dc132b6644057958b0ffb10591a5f5 not found: ID does not exist" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.468841 4768 scope.go:117] "RemoveContainer" containerID="4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948" Feb 17 14:33:39 crc kubenswrapper[4768]: E0217 14:33:39.469239 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948\": container with ID starting with 4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948 not found: ID does not exist" containerID="4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.469268 4768 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948"} err="failed to get container status \"4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948\": rpc error: code = NotFound desc = could not find container \"4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948\": container with ID starting with 4e5c243b97a15b65d65915b198d8f156f4f4017b4a040a6c866acbb88f030948 not found: ID does not exist" Feb 17 14:33:39 crc kubenswrapper[4768]: I0217 14:33:39.553750 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" path="/var/lib/kubelet/pods/a587d82f-a2ac-4dbc-91be-66049fd5662a/volumes" Feb 17 14:33:41 crc kubenswrapper[4768]: I0217 14:33:41.439092 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-tpnsx_6a5895c9-f283-43f0-82d7-c8a0cbf377ce/control-plane-machine-set-operator/0.log" Feb 17 14:33:41 crc kubenswrapper[4768]: I0217 14:33:41.559397 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2b5c_38dc9a37-3332-40e5-b4cd-3c702455584d/kube-rbac-proxy/0.log" Feb 17 14:33:41 crc kubenswrapper[4768]: I0217 14:33:41.623978 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2b5c_38dc9a37-3332-40e5-b4cd-3c702455584d/machine-api-operator/0.log" Feb 17 14:33:54 crc kubenswrapper[4768]: I0217 14:33:54.534659 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ktxxp_390428c9-7c97-428f-b609-39f72ff5e558/cert-manager-controller/0.log" Feb 17 14:33:54 crc kubenswrapper[4768]: I0217 14:33:54.673511 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-g4sjn_ddf2aeae-0541-4180-883f-a7bdfeb65a57/cert-manager-cainjector/0.log" Feb 17 14:33:54 crc kubenswrapper[4768]: I0217 14:33:54.764383 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8nwpk_c39c013f-68bd-4b7b-9582-2cecc55854a5/cert-manager-webhook/0.log" Feb 17 14:34:08 crc kubenswrapper[4768]: I0217 14:34:08.024987 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-2cs2r_b7911594-28b5-4c40-b08b-5f3b33d9bd11/nmstate-console-plugin/0.log" Feb 17 14:34:08 crc kubenswrapper[4768]: I0217 14:34:08.221558 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dfmfq_aef4be82-c769-456d-90be-c95789ab9c2c/nmstate-handler/0.log" Feb 17 14:34:08 crc kubenswrapper[4768]: I0217 14:34:08.268147 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-f5lj2_77308fca-ea01-49d6-b264-61df88438fd0/kube-rbac-proxy/0.log" Feb 17 14:34:08 crc kubenswrapper[4768]: I0217 14:34:08.325181 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-f5lj2_77308fca-ea01-49d6-b264-61df88438fd0/nmstate-metrics/0.log" Feb 17 14:34:08 crc kubenswrapper[4768]: I0217 14:34:08.418523 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2fndh_89be9918-4f1d-4c85-8d1c-73f9245fd232/nmstate-operator/0.log" Feb 17 14:34:08 crc kubenswrapper[4768]: I0217 14:34:08.531770 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-ghl29_f4ddc594-4af8-4856-9542-0a76bf8c5acc/nmstate-webhook/0.log" Feb 17 14:34:28 crc kubenswrapper[4768]: I0217 14:34:28.060001 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:34:28 crc kubenswrapper[4768]: I0217 14:34:28.060706 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.123653 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-nv4f7_1a756f9a-bd11-42b8-9b67-1585ee9a5322/kube-rbac-proxy/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.153439 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-nv4f7_1a756f9a-bd11-42b8-9b67-1585ee9a5322/controller/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.335759 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.489544 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.510859 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.556233 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.558850 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.738809 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.761332 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.765935 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.772679 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.927860 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.960970 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.965280 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/controller/0.log" Feb 17 14:34:35 crc kubenswrapper[4768]: I0217 14:34:35.993166 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:34:36 crc kubenswrapper[4768]: I0217 14:34:36.314230 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/frr-metrics/0.log" Feb 17 14:34:36 crc kubenswrapper[4768]: I0217 14:34:36.429135 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/kube-rbac-proxy/0.log" Feb 17 14:34:36 crc kubenswrapper[4768]: I0217 14:34:36.453484 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/kube-rbac-proxy-frr/0.log" Feb 17 14:34:36 crc kubenswrapper[4768]: I0217 14:34:36.534361 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/reloader/0.log" Feb 17 14:34:36 crc kubenswrapper[4768]: I0217 14:34:36.661054 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-qq5lm_8d47747b-4164-4e0e-b424-513d688cf6a8/frr-k8s-webhook-server/0.log" Feb 17 14:34:36 crc kubenswrapper[4768]: I0217 14:34:36.886776 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-df9f8fb7d-rjc2w_75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e/manager/0.log" Feb 17 14:34:37 crc kubenswrapper[4768]: I0217 14:34:37.004361 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-654b8769b8-plb5c_fac3924e-d369-478e-9c10-c0a381b8696c/webhook-server/0.log" Feb 17 14:34:37 crc kubenswrapper[4768]: I0217 14:34:37.149434 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8trsw_90e8e26b-3dc0-4bf7-a493-8c089ace61a0/kube-rbac-proxy/0.log" Feb 17 14:34:37 crc kubenswrapper[4768]: I0217 14:34:37.604705 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8trsw_90e8e26b-3dc0-4bf7-a493-8c089ace61a0/speaker/0.log" Feb 17 14:34:37 crc kubenswrapper[4768]: I0217 14:34:37.611162 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/frr/0.log" Feb 17 14:34:50 crc kubenswrapper[4768]: I0217 14:34:50.496274 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/util/0.log" Feb 17 14:34:50 crc kubenswrapper[4768]: I0217 14:34:50.707532 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/pull/0.log" Feb 17 14:34:50 crc kubenswrapper[4768]: I0217 14:34:50.761701 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/util/0.log" Feb 17 14:34:50 crc kubenswrapper[4768]: I0217 14:34:50.765917 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/pull/0.log" Feb 17 14:34:50 crc kubenswrapper[4768]: I0217 14:34:50.950331 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/extract/0.log" Feb 17 14:34:50 crc kubenswrapper[4768]: I0217 14:34:50.969287 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/util/0.log" Feb 17 14:34:51 crc kubenswrapper[4768]: I0217 14:34:51.004806 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/pull/0.log" Feb 17 14:34:51 crc 
kubenswrapper[4768]: I0217 14:34:51.187091 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-utilities/0.log"
Feb 17 14:34:51 crc kubenswrapper[4768]: I0217 14:34:51.342135 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-content/0.log"
Feb 17 14:34:51 crc kubenswrapper[4768]: I0217 14:34:51.351776 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-content/0.log"
Feb 17 14:34:51 crc kubenswrapper[4768]: I0217 14:34:51.373451 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-utilities/0.log"
Feb 17 14:34:51 crc kubenswrapper[4768]: I0217 14:34:51.724811 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-content/0.log"
Feb 17 14:34:51 crc kubenswrapper[4768]: I0217 14:34:51.741310 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-utilities/0.log"
Feb 17 14:34:51 crc kubenswrapper[4768]: I0217 14:34:51.951199 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-utilities/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.199448 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-content/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.223950 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-utilities/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.232709 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-content/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.299699 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/registry-server/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.392207 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-utilities/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.409741 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-content/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.588769 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/util/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.885846 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/util/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.894986 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/pull/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.975734 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/pull/0.log"
Feb 17 14:34:52 crc kubenswrapper[4768]: I0217 14:34:52.979019 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/registry-server/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.068180 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/util/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.120317 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/pull/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.185937 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/extract/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.278422 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-grp2v_949f4cbc-e86f-4f30-bac7-d31c24169e4e/marketplace-operator/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.419223 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-utilities/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.570006 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-content/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.637025 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-content/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.674922 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-utilities/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.863461 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-content/0.log"
Feb 17 14:34:53 crc kubenswrapper[4768]: I0217 14:34:53.879006 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-utilities/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.062083 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/registry-server/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.122757 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-utilities/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.285531 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-content/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.296358 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-utilities/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.328367 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-content/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.545292 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-utilities/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.546523 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-content/0.log"
Feb 17 14:34:54 crc kubenswrapper[4768]: I0217 14:34:54.666633 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/registry-server/0.log"
Feb 17 14:34:58 crc kubenswrapper[4768]: I0217 14:34:58.060359 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:34:58 crc kubenswrapper[4768]: I0217 14:34:58.060903 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.059656 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.060216 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.060256 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4"
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.061125 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.061215 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903" gracePeriod=600
Feb 17 14:35:28 crc kubenswrapper[4768]: E0217 14:35:28.197029 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.316876 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903" exitCode=0
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.316927 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"}
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.316965 4768 scope.go:117] "RemoveContainer" containerID="ec2ca14acc6c65d44de9f0616d9b906984ff6fc79d13b91607724b751e6b996b"
Feb 17 14:35:28 crc kubenswrapper[4768]: I0217 14:35:28.317786 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:35:28 crc kubenswrapper[4768]: E0217 14:35:28.320165 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:35:43 crc kubenswrapper[4768]: I0217 14:35:43.534775 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:35:43 crc kubenswrapper[4768]: E0217 14:35:43.535718 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:35:58 crc kubenswrapper[4768]: I0217 14:35:58.534608 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:35:58 crc kubenswrapper[4768]: E0217 14:35:58.535583 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:36:10 crc kubenswrapper[4768]: I0217 14:36:10.535440 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:36:10 crc kubenswrapper[4768]: E0217 14:36:10.536246 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:36:22 crc kubenswrapper[4768]: I0217 14:36:22.534959 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:36:22 crc kubenswrapper[4768]: E0217 14:36:22.536484 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:36:33 crc kubenswrapper[4768]: I0217 14:36:33.534645 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:36:33 crc kubenswrapper[4768]: E0217 14:36:33.535941 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:36:42 crc kubenswrapper[4768]: I0217 14:36:42.107465 4768 generic.go:334] "Generic (PLEG): container finished" podID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerID="93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1" exitCode=0
Feb 17 14:36:42 crc kubenswrapper[4768]: I0217 14:36:42.107561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmvln/must-gather-2qkjw" event={"ID":"0435c3c9-4bd7-46a0-9cd2-df744778c614","Type":"ContainerDied","Data":"93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1"}
Feb 17 14:36:42 crc kubenswrapper[4768]: I0217 14:36:42.108899 4768 scope.go:117] "RemoveContainer" containerID="93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1"
Feb 17 14:36:42 crc kubenswrapper[4768]: I0217 14:36:42.898655 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kmvln_must-gather-2qkjw_0435c3c9-4bd7-46a0-9cd2-df744778c614/gather/0.log"
Feb 17 14:36:47 crc kubenswrapper[4768]: I0217 14:36:47.534643 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:36:47 crc kubenswrapper[4768]: E0217 14:36:47.535900 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.286605 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmvln/must-gather-2qkjw"]
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.287377 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kmvln/must-gather-2qkjw" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerName="copy" containerID="cri-o://aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb" gracePeriod=2
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.294220 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmvln/must-gather-2qkjw"]
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.775332 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kmvln_must-gather-2qkjw_0435c3c9-4bd7-46a0-9cd2-df744778c614/copy/0.log"
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.776302 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/must-gather-2qkjw"
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.853822 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0435c3c9-4bd7-46a0-9cd2-df744778c614-must-gather-output\") pod \"0435c3c9-4bd7-46a0-9cd2-df744778c614\" (UID: \"0435c3c9-4bd7-46a0-9cd2-df744778c614\") "
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.853880 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfnkw\" (UniqueName: \"kubernetes.io/projected/0435c3c9-4bd7-46a0-9cd2-df744778c614-kube-api-access-hfnkw\") pod \"0435c3c9-4bd7-46a0-9cd2-df744778c614\" (UID: \"0435c3c9-4bd7-46a0-9cd2-df744778c614\") "
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.859736 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0435c3c9-4bd7-46a0-9cd2-df744778c614-kube-api-access-hfnkw" (OuterVolumeSpecName: "kube-api-access-hfnkw") pod "0435c3c9-4bd7-46a0-9cd2-df744778c614" (UID: "0435c3c9-4bd7-46a0-9cd2-df744778c614"). InnerVolumeSpecName "kube-api-access-hfnkw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:36:51 crc kubenswrapper[4768]: I0217 14:36:51.957722 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfnkw\" (UniqueName: \"kubernetes.io/projected/0435c3c9-4bd7-46a0-9cd2-df744778c614-kube-api-access-hfnkw\") on node \"crc\" DevicePath \"\""
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.005693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0435c3c9-4bd7-46a0-9cd2-df744778c614-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0435c3c9-4bd7-46a0-9cd2-df744778c614" (UID: "0435c3c9-4bd7-46a0-9cd2-df744778c614"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.059083 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0435c3c9-4bd7-46a0-9cd2-df744778c614-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.217708 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kmvln_must-gather-2qkjw_0435c3c9-4bd7-46a0-9cd2-df744778c614/copy/0.log"
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.218182 4768 generic.go:334] "Generic (PLEG): container finished" podID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerID="aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb" exitCode=143
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.218249 4768 scope.go:117] "RemoveContainer" containerID="aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb"
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.218444 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmvln/must-gather-2qkjw"
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.240337 4768 scope.go:117] "RemoveContainer" containerID="93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1"
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.311271 4768 scope.go:117] "RemoveContainer" containerID="aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb"
Feb 17 14:36:52 crc kubenswrapper[4768]: E0217 14:36:52.311726 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb\": container with ID starting with aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb not found: ID does not exist" containerID="aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb"
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.311768 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb"} err="failed to get container status \"aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb\": rpc error: code = NotFound desc = could not find container \"aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb\": container with ID starting with aeea91b5f8db2c28dd88a7f6e7eb105996e7d91bd1ec3a1f9442b3b385aa2dcb not found: ID does not exist"
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.311794 4768 scope.go:117] "RemoveContainer" containerID="93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1"
Feb 17 14:36:52 crc kubenswrapper[4768]: E0217 14:36:52.312429 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1\": container with ID starting with 93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1 not found: ID does not exist" containerID="93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1"
Feb 17 14:36:52 crc kubenswrapper[4768]: I0217 14:36:52.312460 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1"} err="failed to get container status \"93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1\": rpc error: code = NotFound desc = could not find container \"93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1\": container with ID starting with 93ea69202fbb8aa300d562dd517c73d6043d6f52d1f44aa73a2edda55a7218f1 not found: ID does not exist"
Feb 17 14:36:53 crc kubenswrapper[4768]: I0217 14:36:53.551184 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" path="/var/lib/kubelet/pods/0435c3c9-4bd7-46a0-9cd2-df744778c614/volumes"
Feb 17 14:37:00 crc kubenswrapper[4768]: I0217 14:37:00.534825 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:37:00 crc kubenswrapper[4768]: E0217 14:37:00.535755 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:37:14 crc kubenswrapper[4768]: I0217 14:37:14.534225 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:37:14 crc kubenswrapper[4768]: E0217 14:37:14.534765 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:37:27 crc kubenswrapper[4768]: I0217 14:37:27.534521 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:37:27 crc kubenswrapper[4768]: E0217 14:37:27.535705 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:37:39 crc kubenswrapper[4768]: I0217 14:37:39.535454 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:37:39 crc kubenswrapper[4768]: E0217 14:37:39.536553 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:37:50 crc kubenswrapper[4768]: I0217 14:37:50.534133 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:37:50 crc kubenswrapper[4768]: E0217 14:37:50.534971 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:38:04 crc kubenswrapper[4768]: I0217 14:38:04.534579 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:38:04 crc kubenswrapper[4768]: E0217 14:38:04.535306 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:38:18 crc kubenswrapper[4768]: I0217 14:38:18.534901 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:38:18 crc kubenswrapper[4768]: E0217 14:38:18.535915 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:38:30 crc kubenswrapper[4768]: I0217 14:38:30.182238 4768 scope.go:117] "RemoveContainer" containerID="0dad2640359b822dc7f772626cf4a0f8faa958d737327fac924fbf56ec5c8da6"
Feb 17 14:38:31 crc kubenswrapper[4768]: I0217 14:38:31.546702 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:38:31 crc kubenswrapper[4768]: E0217 14:38:31.547556 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:38:43 crc kubenswrapper[4768]: I0217 14:38:43.535180 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:38:43 crc kubenswrapper[4768]: E0217 14:38:43.536025 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:38:58 crc kubenswrapper[4768]: I0217 14:38:58.535204 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:38:58 crc kubenswrapper[4768]: E0217 14:38:58.535857 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:39:13 crc kubenswrapper[4768]: I0217 14:39:13.534741 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:39:13 crc kubenswrapper[4768]: E0217 14:39:13.538076 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:39:26 crc kubenswrapper[4768]: I0217 14:39:26.534959 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:39:26 crc kubenswrapper[4768]: E0217 14:39:26.535627 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:39:41 crc kubenswrapper[4768]: I0217 14:39:41.544187 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:39:41 crc kubenswrapper[4768]: E0217 14:39:41.545085 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:39:52 crc kubenswrapper[4768]: I0217 14:39:52.534914 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903"
Feb 17 14:39:52 crc kubenswrapper[4768]: E0217 14:39:52.536008 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.806608 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92sxp"]
Feb 17 14:39:55 crc kubenswrapper[4768]: E0217 14:39:55.807419 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerName="gather"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807436 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerName="gather"
Feb 17 14:39:55 crc kubenswrapper[4768]: E0217 14:39:55.807464 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="extract-utilities"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807474 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="extract-utilities"
Feb 17 14:39:55 crc kubenswrapper[4768]: E0217 14:39:55.807489 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="extract-content"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807497 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="extract-content"
Feb 17 14:39:55 crc kubenswrapper[4768]: E0217 14:39:55.807513 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="registry-server"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807520 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="registry-server"
Feb 17 14:39:55 crc kubenswrapper[4768]: E0217 14:39:55.807543 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerName="copy"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807551 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerName="copy"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807743 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a587d82f-a2ac-4dbc-91be-66049fd5662a" containerName="registry-server"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807768 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerName="gather"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.807786 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0435c3c9-4bd7-46a0-9cd2-df744778c614" containerName="copy"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.809379 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92sxp"
Feb 17 14:39:55 crc kubenswrapper[4768]: I0217 14:39:55.820735 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92sxp"]
Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.001658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h574k\" (UniqueName: \"kubernetes.io/projected/c81725f1-d754-4601-a524-82d393c5bbdc-kube-api-access-h574k\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp"
Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.001739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-utilities\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp"
Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.001775 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-catalog-content\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp"
Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.104309 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-catalog-content\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp"
Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.104512 4768 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-h574k\" (UniqueName: \"kubernetes.io/projected/c81725f1-d754-4601-a524-82d393c5bbdc-kube-api-access-h574k\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.104596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-utilities\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.105176 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-utilities\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.105439 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-catalog-content\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.124949 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h574k\" (UniqueName: \"kubernetes.io/projected/c81725f1-d754-4601-a524-82d393c5bbdc-kube-api-access-h574k\") pod \"certified-operators-92sxp\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.128883 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:39:56 crc kubenswrapper[4768]: I0217 14:39:56.600805 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92sxp"] Feb 17 14:39:57 crc kubenswrapper[4768]: I0217 14:39:57.095067 4768 generic.go:334] "Generic (PLEG): container finished" podID="c81725f1-d754-4601-a524-82d393c5bbdc" containerID="055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee" exitCode=0 Feb 17 14:39:57 crc kubenswrapper[4768]: I0217 14:39:57.095146 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92sxp" event={"ID":"c81725f1-d754-4601-a524-82d393c5bbdc","Type":"ContainerDied","Data":"055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee"} Feb 17 14:39:57 crc kubenswrapper[4768]: I0217 14:39:57.095419 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92sxp" event={"ID":"c81725f1-d754-4601-a524-82d393c5bbdc","Type":"ContainerStarted","Data":"777f390649f3e94b839e955ee2513a8b4d5c06167009860aa1fa5b0c65b6c95c"} Feb 17 14:39:57 crc kubenswrapper[4768]: I0217 14:39:57.097701 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 14:39:59 crc kubenswrapper[4768]: I0217 14:39:59.115148 4768 generic.go:334] "Generic (PLEG): container finished" podID="c81725f1-d754-4601-a524-82d393c5bbdc" containerID="2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce" exitCode=0 Feb 17 14:39:59 crc kubenswrapper[4768]: I0217 14:39:59.115278 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92sxp" event={"ID":"c81725f1-d754-4601-a524-82d393c5bbdc","Type":"ContainerDied","Data":"2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce"} Feb 17 14:40:00 crc kubenswrapper[4768]: I0217 14:40:00.128401 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-92sxp" event={"ID":"c81725f1-d754-4601-a524-82d393c5bbdc","Type":"ContainerStarted","Data":"b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429"} Feb 17 14:40:00 crc kubenswrapper[4768]: I0217 14:40:00.155336 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-92sxp" podStartSLOduration=2.648849989 podStartE2EDuration="5.155317545s" podCreationTimestamp="2026-02-17 14:39:55 +0000 UTC" firstStartedPulling="2026-02-17 14:39:57.097343901 +0000 UTC m=+3816.376730353" lastFinishedPulling="2026-02-17 14:39:59.603811467 +0000 UTC m=+3818.883197909" observedRunningTime="2026-02-17 14:40:00.152850227 +0000 UTC m=+3819.432236669" watchObservedRunningTime="2026-02-17 14:40:00.155317545 +0000 UTC m=+3819.434703987" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.651312 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dxhrg/must-gather-7z8dt"] Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.653697 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.655084 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dxhrg"/"default-dockercfg-hmbrn" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.656461 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dxhrg"/"kube-root-ca.crt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.656461 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dxhrg"/"openshift-service-ca.crt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.659378 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dxhrg/must-gather-7z8dt"] Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.838133 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nps8\" (UniqueName: \"kubernetes.io/projected/f52c76e0-cf87-47a2-a917-fb08c2924e10-kube-api-access-6nps8\") pod \"must-gather-7z8dt\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") " pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.838197 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f52c76e0-cf87-47a2-a917-fb08c2924e10-must-gather-output\") pod \"must-gather-7z8dt\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") " pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.941319 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nps8\" (UniqueName: \"kubernetes.io/projected/f52c76e0-cf87-47a2-a917-fb08c2924e10-kube-api-access-6nps8\") pod \"must-gather-7z8dt\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") " 
pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.941493 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f52c76e0-cf87-47a2-a917-fb08c2924e10-must-gather-output\") pod \"must-gather-7z8dt\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") " pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.942125 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f52c76e0-cf87-47a2-a917-fb08c2924e10-must-gather-output\") pod \"must-gather-7z8dt\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") " pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.973048 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nps8\" (UniqueName: \"kubernetes.io/projected/f52c76e0-cf87-47a2-a917-fb08c2924e10-kube-api-access-6nps8\") pod \"must-gather-7z8dt\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") " pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:02 crc kubenswrapper[4768]: I0217 14:40:02.978301 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" Feb 17 14:40:03 crc kubenswrapper[4768]: I0217 14:40:03.527118 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dxhrg/must-gather-7z8dt"] Feb 17 14:40:04 crc kubenswrapper[4768]: I0217 14:40:04.171832 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" event={"ID":"f52c76e0-cf87-47a2-a917-fb08c2924e10","Type":"ContainerStarted","Data":"82ece0c4675758509eb23f0766680df57b2c93ddcce4db2aa1315f89b91c1eb4"} Feb 17 14:40:05 crc kubenswrapper[4768]: I0217 14:40:05.180711 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" event={"ID":"f52c76e0-cf87-47a2-a917-fb08c2924e10","Type":"ContainerStarted","Data":"243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045"} Feb 17 14:40:05 crc kubenswrapper[4768]: I0217 14:40:05.181244 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" event={"ID":"f52c76e0-cf87-47a2-a917-fb08c2924e10","Type":"ContainerStarted","Data":"3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e"} Feb 17 14:40:05 crc kubenswrapper[4768]: I0217 14:40:05.202953 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" podStartSLOduration=3.202935683 podStartE2EDuration="3.202935683s" podCreationTimestamp="2026-02-17 14:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:40:05.196215841 +0000 UTC m=+3824.475602303" watchObservedRunningTime="2026-02-17 14:40:05.202935683 +0000 UTC m=+3824.482322125" Feb 17 14:40:05 crc kubenswrapper[4768]: I0217 14:40:05.539053 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903" Feb 17 14:40:05 crc 
kubenswrapper[4768]: E0217 14:40:05.539274 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:40:06 crc kubenswrapper[4768]: I0217 14:40:06.137950 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:40:06 crc kubenswrapper[4768]: I0217 14:40:06.137986 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:40:06 crc kubenswrapper[4768]: I0217 14:40:06.214462 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:40:06 crc kubenswrapper[4768]: I0217 14:40:06.267077 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:40:06 crc kubenswrapper[4768]: I0217 14:40:06.455819 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92sxp"] Feb 17 14:40:07 crc kubenswrapper[4768]: I0217 14:40:07.968975 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-h9nq5"] Feb 17 14:40:07 crc kubenswrapper[4768]: I0217 14:40:07.970282 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.069387 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8k6\" (UniqueName: \"kubernetes.io/projected/3533ca27-4b12-427c-aa62-b34231d09a64-kube-api-access-pb8k6\") pod \"crc-debug-h9nq5\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.069516 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3533ca27-4b12-427c-aa62-b34231d09a64-host\") pod \"crc-debug-h9nq5\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.171365 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb8k6\" (UniqueName: \"kubernetes.io/projected/3533ca27-4b12-427c-aa62-b34231d09a64-kube-api-access-pb8k6\") pod \"crc-debug-h9nq5\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.171454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3533ca27-4b12-427c-aa62-b34231d09a64-host\") pod \"crc-debug-h9nq5\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.171602 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3533ca27-4b12-427c-aa62-b34231d09a64-host\") pod \"crc-debug-h9nq5\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc 
kubenswrapper[4768]: I0217 14:40:08.189755 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb8k6\" (UniqueName: \"kubernetes.io/projected/3533ca27-4b12-427c-aa62-b34231d09a64-kube-api-access-pb8k6\") pod \"crc-debug-h9nq5\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.209507 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-92sxp" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" containerName="registry-server" containerID="cri-o://b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429" gracePeriod=2 Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.307977 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:08 crc kubenswrapper[4768]: W0217 14:40:08.370027 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3533ca27_4b12_427c_aa62_b34231d09a64.slice/crio-371d2e9fef53fe3c89793785892b5495523c6432918ddab520a8c394b6f98c21 WatchSource:0}: Error finding container 371d2e9fef53fe3c89793785892b5495523c6432918ddab520a8c394b6f98c21: Status 404 returned error can't find the container with id 371d2e9fef53fe3c89793785892b5495523c6432918ddab520a8c394b6f98c21 Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.631808 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.783830 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-utilities\") pod \"c81725f1-d754-4601-a524-82d393c5bbdc\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.784443 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h574k\" (UniqueName: \"kubernetes.io/projected/c81725f1-d754-4601-a524-82d393c5bbdc-kube-api-access-h574k\") pod \"c81725f1-d754-4601-a524-82d393c5bbdc\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.784546 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-catalog-content\") pod \"c81725f1-d754-4601-a524-82d393c5bbdc\" (UID: \"c81725f1-d754-4601-a524-82d393c5bbdc\") " Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.784699 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-utilities" (OuterVolumeSpecName: "utilities") pod "c81725f1-d754-4601-a524-82d393c5bbdc" (UID: "c81725f1-d754-4601-a524-82d393c5bbdc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.784934 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.792032 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81725f1-d754-4601-a524-82d393c5bbdc-kube-api-access-h574k" (OuterVolumeSpecName: "kube-api-access-h574k") pod "c81725f1-d754-4601-a524-82d393c5bbdc" (UID: "c81725f1-d754-4601-a524-82d393c5bbdc"). InnerVolumeSpecName "kube-api-access-h574k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.886785 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h574k\" (UniqueName: \"kubernetes.io/projected/c81725f1-d754-4601-a524-82d393c5bbdc-kube-api-access-h574k\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:08 crc kubenswrapper[4768]: I0217 14:40:08.996538 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c81725f1-d754-4601-a524-82d393c5bbdc" (UID: "c81725f1-d754-4601-a524-82d393c5bbdc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.090742 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81725f1-d754-4601-a524-82d393c5bbdc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.222871 4768 generic.go:334] "Generic (PLEG): container finished" podID="c81725f1-d754-4601-a524-82d393c5bbdc" containerID="b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429" exitCode=0 Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.222962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92sxp" event={"ID":"c81725f1-d754-4601-a524-82d393c5bbdc","Type":"ContainerDied","Data":"b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429"} Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.223000 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92sxp" event={"ID":"c81725f1-d754-4601-a524-82d393c5bbdc","Type":"ContainerDied","Data":"777f390649f3e94b839e955ee2513a8b4d5c06167009860aa1fa5b0c65b6c95c"} Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.223028 4768 scope.go:117] "RemoveContainer" containerID="b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.223235 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92sxp" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.240678 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" event={"ID":"3533ca27-4b12-427c-aa62-b34231d09a64","Type":"ContainerStarted","Data":"f135f0b99d75614b026845c58ed7f13b53a517a3386838b7e7a663e76dfedd87"} Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.240743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" event={"ID":"3533ca27-4b12-427c-aa62-b34231d09a64","Type":"ContainerStarted","Data":"371d2e9fef53fe3c89793785892b5495523c6432918ddab520a8c394b6f98c21"} Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.260465 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" podStartSLOduration=2.260443144 podStartE2EDuration="2.260443144s" podCreationTimestamp="2026-02-17 14:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 14:40:09.255201731 +0000 UTC m=+3828.534588173" watchObservedRunningTime="2026-02-17 14:40:09.260443144 +0000 UTC m=+3828.539829586" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.284313 4768 scope.go:117] "RemoveContainer" containerID="2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.291286 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92sxp"] Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.305341 4768 scope.go:117] "RemoveContainer" containerID="055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.318029 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-92sxp"] 
Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.353487 4768 scope.go:117] "RemoveContainer" containerID="b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429" Feb 17 14:40:09 crc kubenswrapper[4768]: E0217 14:40:09.354484 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429\": container with ID starting with b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429 not found: ID does not exist" containerID="b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.354524 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429"} err="failed to get container status \"b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429\": rpc error: code = NotFound desc = could not find container \"b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429\": container with ID starting with b7777c288a900c759bdd69d9ae865a17ad44cd52e9efb7c3a195c726a98d8429 not found: ID does not exist" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.354549 4768 scope.go:117] "RemoveContainer" containerID="2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce" Feb 17 14:40:09 crc kubenswrapper[4768]: E0217 14:40:09.354859 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce\": container with ID starting with 2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce not found: ID does not exist" containerID="2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.354900 4768 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce"} err="failed to get container status \"2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce\": rpc error: code = NotFound desc = could not find container \"2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce\": container with ID starting with 2609125eb05870a285c080ed48503f08b660733caf199d321b1eb8dadf0c2bce not found: ID does not exist" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.354917 4768 scope.go:117] "RemoveContainer" containerID="055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee" Feb 17 14:40:09 crc kubenswrapper[4768]: E0217 14:40:09.355224 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee\": container with ID starting with 055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee not found: ID does not exist" containerID="055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.355256 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee"} err="failed to get container status \"055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee\": rpc error: code = NotFound desc = could not find container \"055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee\": container with ID starting with 055eec7c1d02481c8c1a31f2ef112ae94021f063afa69401aab2969db5335bee not found: ID does not exist" Feb 17 14:40:09 crc kubenswrapper[4768]: I0217 14:40:09.545972 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" path="/var/lib/kubelet/pods/c81725f1-d754-4601-a524-82d393c5bbdc/volumes" Feb 17 14:40:17 crc 
kubenswrapper[4768]: I0217 14:40:17.538273 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903" Feb 17 14:40:17 crc kubenswrapper[4768]: E0217 14:40:17.538976 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" Feb 17 14:40:29 crc kubenswrapper[4768]: I0217 14:40:29.537930 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903" Feb 17 14:40:30 crc kubenswrapper[4768]: I0217 14:40:30.421133 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"19f98ba1d898b39bf0a21ff629e9d39b7fa1b2d21cfcd6ddb2bf6f26918bf017"} Feb 17 14:40:42 crc kubenswrapper[4768]: I0217 14:40:42.515553 4768 generic.go:334] "Generic (PLEG): container finished" podID="3533ca27-4b12-427c-aa62-b34231d09a64" containerID="f135f0b99d75614b026845c58ed7f13b53a517a3386838b7e7a663e76dfedd87" exitCode=0 Feb 17 14:40:42 crc kubenswrapper[4768]: I0217 14:40:42.515644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" event={"ID":"3533ca27-4b12-427c-aa62-b34231d09a64","Type":"ContainerDied","Data":"f135f0b99d75614b026845c58ed7f13b53a517a3386838b7e7a663e76dfedd87"} Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.635209 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.663238 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-h9nq5"] Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.669982 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-h9nq5"] Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.799210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb8k6\" (UniqueName: \"kubernetes.io/projected/3533ca27-4b12-427c-aa62-b34231d09a64-kube-api-access-pb8k6\") pod \"3533ca27-4b12-427c-aa62-b34231d09a64\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.799346 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3533ca27-4b12-427c-aa62-b34231d09a64-host\") pod \"3533ca27-4b12-427c-aa62-b34231d09a64\" (UID: \"3533ca27-4b12-427c-aa62-b34231d09a64\") " Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.799665 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3533ca27-4b12-427c-aa62-b34231d09a64-host" (OuterVolumeSpecName: "host") pod "3533ca27-4b12-427c-aa62-b34231d09a64" (UID: "3533ca27-4b12-427c-aa62-b34231d09a64"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.800418 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3533ca27-4b12-427c-aa62-b34231d09a64-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.804565 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3533ca27-4b12-427c-aa62-b34231d09a64-kube-api-access-pb8k6" (OuterVolumeSpecName: "kube-api-access-pb8k6") pod "3533ca27-4b12-427c-aa62-b34231d09a64" (UID: "3533ca27-4b12-427c-aa62-b34231d09a64"). InnerVolumeSpecName "kube-api-access-pb8k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:40:43 crc kubenswrapper[4768]: I0217 14:40:43.902101 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb8k6\" (UniqueName: \"kubernetes.io/projected/3533ca27-4b12-427c-aa62-b34231d09a64-kube-api-access-pb8k6\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.537165 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="371d2e9fef53fe3c89793785892b5495523c6432918ddab520a8c394b6f98c21" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.537211 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-h9nq5" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.869552 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-z6stf"] Feb 17 14:40:44 crc kubenswrapper[4768]: E0217 14:40:44.869956 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3533ca27-4b12-427c-aa62-b34231d09a64" containerName="container-00" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.869970 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3533ca27-4b12-427c-aa62-b34231d09a64" containerName="container-00" Feb 17 14:40:44 crc kubenswrapper[4768]: E0217 14:40:44.869994 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" containerName="registry-server" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.869999 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" containerName="registry-server" Feb 17 14:40:44 crc kubenswrapper[4768]: E0217 14:40:44.870007 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" containerName="extract-utilities" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.870015 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" containerName="extract-utilities" Feb 17 14:40:44 crc kubenswrapper[4768]: E0217 14:40:44.870034 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" containerName="extract-content" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.870040 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" containerName="extract-content" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.870244 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81725f1-d754-4601-a524-82d393c5bbdc" 
containerName="registry-server" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.870258 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3533ca27-4b12-427c-aa62-b34231d09a64" containerName="container-00" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.870812 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.918908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e883806-a4fe-487b-80a3-386fcb643162-host\") pod \"crc-debug-z6stf\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:44 crc kubenswrapper[4768]: I0217 14:40:44.919163 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79fbm\" (UniqueName: \"kubernetes.io/projected/3e883806-a4fe-487b-80a3-386fcb643162-kube-api-access-79fbm\") pod \"crc-debug-z6stf\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:45 crc kubenswrapper[4768]: I0217 14:40:45.020332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79fbm\" (UniqueName: \"kubernetes.io/projected/3e883806-a4fe-487b-80a3-386fcb643162-kube-api-access-79fbm\") pod \"crc-debug-z6stf\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:45 crc kubenswrapper[4768]: I0217 14:40:45.020414 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e883806-a4fe-487b-80a3-386fcb643162-host\") pod \"crc-debug-z6stf\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:45 crc 
kubenswrapper[4768]: I0217 14:40:45.020584 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e883806-a4fe-487b-80a3-386fcb643162-host\") pod \"crc-debug-z6stf\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:45 crc kubenswrapper[4768]: I0217 14:40:45.042032 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79fbm\" (UniqueName: \"kubernetes.io/projected/3e883806-a4fe-487b-80a3-386fcb643162-kube-api-access-79fbm\") pod \"crc-debug-z6stf\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:45 crc kubenswrapper[4768]: I0217 14:40:45.188997 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:45 crc kubenswrapper[4768]: W0217 14:40:45.250199 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e883806_a4fe_487b_80a3_386fcb643162.slice/crio-d6269055429a9bcc8c49f1472c7340a5c239f3751cc59aa9601ba72854de5418 WatchSource:0}: Error finding container d6269055429a9bcc8c49f1472c7340a5c239f3751cc59aa9601ba72854de5418: Status 404 returned error can't find the container with id d6269055429a9bcc8c49f1472c7340a5c239f3751cc59aa9601ba72854de5418 Feb 17 14:40:45 crc kubenswrapper[4768]: I0217 14:40:45.546664 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3533ca27-4b12-427c-aa62-b34231d09a64" path="/var/lib/kubelet/pods/3533ca27-4b12-427c-aa62-b34231d09a64/volumes" Feb 17 14:40:45 crc kubenswrapper[4768]: I0217 14:40:45.548483 4768 generic.go:334] "Generic (PLEG): container finished" podID="3e883806-a4fe-487b-80a3-386fcb643162" containerID="4beba13f028fb85efa97662e6c749bc53a3d2e3f2e2b3529093d1f8931106a22" exitCode=0 Feb 17 14:40:45 crc kubenswrapper[4768]: 
I0217 14:40:45.548523 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/crc-debug-z6stf" event={"ID":"3e883806-a4fe-487b-80a3-386fcb643162","Type":"ContainerDied","Data":"4beba13f028fb85efa97662e6c749bc53a3d2e3f2e2b3529093d1f8931106a22"} Feb 17 14:40:45 crc kubenswrapper[4768]: I0217 14:40:45.548549 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/crc-debug-z6stf" event={"ID":"3e883806-a4fe-487b-80a3-386fcb643162","Type":"ContainerStarted","Data":"d6269055429a9bcc8c49f1472c7340a5c239f3751cc59aa9601ba72854de5418"} Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.009229 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-z6stf"] Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.017349 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-z6stf"] Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.654902 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.775806 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79fbm\" (UniqueName: \"kubernetes.io/projected/3e883806-a4fe-487b-80a3-386fcb643162-kube-api-access-79fbm\") pod \"3e883806-a4fe-487b-80a3-386fcb643162\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.775845 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e883806-a4fe-487b-80a3-386fcb643162-host\") pod \"3e883806-a4fe-487b-80a3-386fcb643162\" (UID: \"3e883806-a4fe-487b-80a3-386fcb643162\") " Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.776329 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e883806-a4fe-487b-80a3-386fcb643162-host" (OuterVolumeSpecName: "host") pod "3e883806-a4fe-487b-80a3-386fcb643162" (UID: "3e883806-a4fe-487b-80a3-386fcb643162"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.781835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e883806-a4fe-487b-80a3-386fcb643162-kube-api-access-79fbm" (OuterVolumeSpecName: "kube-api-access-79fbm") pod "3e883806-a4fe-487b-80a3-386fcb643162" (UID: "3e883806-a4fe-487b-80a3-386fcb643162"). InnerVolumeSpecName "kube-api-access-79fbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.877442 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3e883806-a4fe-487b-80a3-386fcb643162-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:46 crc kubenswrapper[4768]: I0217 14:40:46.877783 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79fbm\" (UniqueName: \"kubernetes.io/projected/3e883806-a4fe-487b-80a3-386fcb643162-kube-api-access-79fbm\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.221371 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-24x5v"] Feb 17 14:40:47 crc kubenswrapper[4768]: E0217 14:40:47.221802 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e883806-a4fe-487b-80a3-386fcb643162" containerName="container-00" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.221817 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e883806-a4fe-487b-80a3-386fcb643162" containerName="container-00" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.222053 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e883806-a4fe-487b-80a3-386fcb643162" containerName="container-00" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.222820 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.386667 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-host\") pod \"crc-debug-24x5v\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.386881 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cxwj\" (UniqueName: \"kubernetes.io/projected/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-kube-api-access-5cxwj\") pod \"crc-debug-24x5v\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.489586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cxwj\" (UniqueName: \"kubernetes.io/projected/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-kube-api-access-5cxwj\") pod \"crc-debug-24x5v\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.489803 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-host\") pod \"crc-debug-24x5v\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.489897 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-host\") pod \"crc-debug-24x5v\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc 
kubenswrapper[4768]: I0217 14:40:47.507051 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cxwj\" (UniqueName: \"kubernetes.io/projected/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-kube-api-access-5cxwj\") pod \"crc-debug-24x5v\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.539338 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.550228 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e883806-a4fe-487b-80a3-386fcb643162" path="/var/lib/kubelet/pods/3e883806-a4fe-487b-80a3-386fcb643162/volumes" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.571974 4768 scope.go:117] "RemoveContainer" containerID="4beba13f028fb85efa97662e6c749bc53a3d2e3f2e2b3529093d1f8931106a22" Feb 17 14:40:47 crc kubenswrapper[4768]: I0217 14:40:47.572340 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-z6stf" Feb 17 14:40:48 crc kubenswrapper[4768]: I0217 14:40:48.581609 4768 generic.go:334] "Generic (PLEG): container finished" podID="d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a" containerID="4cf2e4ca2e65446ed6bfac1ca4701f5c70241acc62c17669d37c28bff2a6bf2c" exitCode=0 Feb 17 14:40:48 crc kubenswrapper[4768]: I0217 14:40:48.581908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/crc-debug-24x5v" event={"ID":"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a","Type":"ContainerDied","Data":"4cf2e4ca2e65446ed6bfac1ca4701f5c70241acc62c17669d37c28bff2a6bf2c"} Feb 17 14:40:48 crc kubenswrapper[4768]: I0217 14:40:48.581933 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/crc-debug-24x5v" event={"ID":"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a","Type":"ContainerStarted","Data":"1d815cc34a47295ee81d1b5df194b705e0c47de0299fa2d42d51d0dcba81a9ac"} Feb 17 14:40:48 crc kubenswrapper[4768]: I0217 14:40:48.620755 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-24x5v"] Feb 17 14:40:48 crc kubenswrapper[4768]: I0217 14:40:48.629827 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dxhrg/crc-debug-24x5v"] Feb 17 14:40:49 crc kubenswrapper[4768]: I0217 14:40:49.682641 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:49 crc kubenswrapper[4768]: I0217 14:40:49.833228 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cxwj\" (UniqueName: \"kubernetes.io/projected/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-kube-api-access-5cxwj\") pod \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " Feb 17 14:40:49 crc kubenswrapper[4768]: I0217 14:40:49.833364 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-host\") pod \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\" (UID: \"d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a\") " Feb 17 14:40:49 crc kubenswrapper[4768]: I0217 14:40:49.833516 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-host" (OuterVolumeSpecName: "host") pod "d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a" (UID: "d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 14:40:49 crc kubenswrapper[4768]: I0217 14:40:49.833808 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-host\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:49 crc kubenswrapper[4768]: I0217 14:40:49.839613 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-kube-api-access-5cxwj" (OuterVolumeSpecName: "kube-api-access-5cxwj") pod "d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a" (UID: "d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a"). InnerVolumeSpecName "kube-api-access-5cxwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:40:49 crc kubenswrapper[4768]: I0217 14:40:49.935555 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cxwj\" (UniqueName: \"kubernetes.io/projected/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a-kube-api-access-5cxwj\") on node \"crc\" DevicePath \"\"" Feb 17 14:40:50 crc kubenswrapper[4768]: I0217 14:40:50.602733 4768 scope.go:117] "RemoveContainer" containerID="4cf2e4ca2e65446ed6bfac1ca4701f5c70241acc62c17669d37c28bff2a6bf2c" Feb 17 14:40:50 crc kubenswrapper[4768]: I0217 14:40:50.602867 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dxhrg/crc-debug-24x5v" Feb 17 14:40:51 crc kubenswrapper[4768]: I0217 14:40:51.550431 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a" path="/var/lib/kubelet/pods/d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a/volumes" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.187172 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f5954c4f6-p5w62_3d5e5fc2-44f3-45d7-848c-ed40f1ea1401/barbican-api/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.273233 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f5954c4f6-p5w62_3d5e5fc2-44f3-45d7-848c-ed40f1ea1401/barbican-api-log/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.391747 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-f5cff5694-mvlv5_0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7/barbican-keystone-listener/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.446364 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-f5cff5694-mvlv5_0ac0a67d-ed1c-4a10-8a5b-c2ecde6d3fc7/barbican-keystone-listener-log/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.588281 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68cd444875-wgnnm_486b688d-e9dd-4c6b-ae8d-c2e536172e53/barbican-worker-log/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.595887 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68cd444875-wgnnm_486b688d-e9dd-4c6b-ae8d-c2e536172e53/barbican-worker/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.697294 4768 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3e883806-a4fe-487b-80a3-386fcb643162"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod3e883806-a4fe-487b-80a3-386fcb643162] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3e883806_a4fe_487b_80a3_386fcb643162.slice" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.750004 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-qplfr_0cf2614c-dbfe-400c-a4ff-a19a96c2f9a0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.805414 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/ceilometer-central-agent/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.885115 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/ceilometer-notification-agent/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.954155 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/proxy-httpd/0.log" Feb 17 14:41:17 crc kubenswrapper[4768]: I0217 14:41:17.980270 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b042dd2b-3a49-4aec-a401-e0f3980f0e73/sg-core/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: 
I0217 14:41:18.099280 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_797f85b4-f933-4b20-b7a5-e2f3b17a5b56/cinder-api/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.171951 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_797f85b4-f933-4b20-b7a5-e2f3b17a5b56/cinder-api-log/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.257358 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd2b9dae-27bf-467c-96e0-194f0e25b814/cinder-scheduler/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.338175 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd2b9dae-27bf-467c-96e0-194f0e25b814/probe/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.412487 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-zzvmc_9749c980-c481-4841-b24e-bd1dc6625b59/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.548342 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mr4bd_e5fb7529-06bd-4dbe-aeb8-5753feec5be2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.594090 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rbrnb_22064d12-d9c4-45c2-927e-77ce03c906bb/init/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.774702 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rbrnb_22064d12-d9c4-45c2-927e-77ce03c906bb/init/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.868487 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rbrnb_22064d12-d9c4-45c2-927e-77ce03c906bb/dnsmasq-dns/0.log" Feb 17 14:41:18 crc kubenswrapper[4768]: I0217 14:41:18.884172 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-blr9d_a8163799-ddb2-4876-830f-19da3abc4578/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.041474 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5d72c76c-a1d7-4256-ada6-3216f5d7c71a/glance-httpd/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.056493 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5d72c76c-a1d7-4256-ada6-3216f5d7c71a/glance-log/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.361361 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_46ee7793-1245-4648-aa12-ae11b1db13ca/glance-httpd/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.382530 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_46ee7793-1245-4648-aa12-ae11b1db13ca/glance-log/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.573983 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6584d79658-wtxrc_331a37d3-96b1-4065-9941-25acc64cc6c1/horizon/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.634187 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7j9x5_f84244fa-e156-4bf4-bc42-22336b96a556/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.835512 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-6584d79658-wtxrc_331a37d3-96b1-4065-9941-25acc64cc6c1/horizon-log/0.log" Feb 17 14:41:19 crc kubenswrapper[4768]: I0217 14:41:19.922324 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rdgml_c7f3a2b7-d13a-4cb1-9b34-5cd1c21cf3c6/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:20 crc kubenswrapper[4768]: I0217 14:41:20.121136 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522281-zd786_b890b491-00b8-4c5c-9eb9-95f403148371/keystone-cron/0.log" Feb 17 14:41:20 crc kubenswrapper[4768]: I0217 14:41:20.182895 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-77c78fc8c5-fgk9h_f8201b1d-afab-4fc2-bde1-bad212359f0a/keystone-api/0.log" Feb 17 14:41:20 crc kubenswrapper[4768]: I0217 14:41:20.314388 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dceedb47-5ab1-46d0-9e16-a8d267d73ff8/kube-state-metrics/0.log" Feb 17 14:41:20 crc kubenswrapper[4768]: I0217 14:41:20.435947 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-nnqwp_30a6ce8f-2b64-4ba9-803c-15c5bbde1cf8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:20 crc kubenswrapper[4768]: I0217 14:41:20.738556 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85664fc4b9-7bclg_f0bb15c9-ac11-47c0-893f-5f0f36554f2b/neutron-httpd/0.log" Feb 17 14:41:20 crc kubenswrapper[4768]: I0217 14:41:20.746428 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85664fc4b9-7bclg_f0bb15c9-ac11-47c0-893f-5f0f36554f2b/neutron-api/0.log" Feb 17 14:41:20 crc kubenswrapper[4768]: I0217 14:41:20.956549 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zxlx2_4fa13453-9d50-4130-ad98-37c224390a7e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:21 crc kubenswrapper[4768]: I0217 14:41:21.539654 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe/nova-api-log/0.log" Feb 17 14:41:21 crc kubenswrapper[4768]: I0217 14:41:21.773755 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_96c0f340-0c30-46ee-8c25-b4c96718d2b0/nova-cell0-conductor-conductor/0.log" Feb 17 14:41:21 crc kubenswrapper[4768]: I0217 14:41:21.829467 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8fd17ba2-d7f2-4af3-a8e7-078578b6c8fe/nova-api-api/0.log" Feb 17 14:41:22 crc kubenswrapper[4768]: I0217 14:41:22.293084 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ee6256eb-4e26-4e93-ae49-8c6be5aace6c/nova-cell1-conductor-conductor/0.log" Feb 17 14:41:22 crc kubenswrapper[4768]: I0217 14:41:22.328873 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_15ac025d-e62d-4a1d-8f2c-86d36c7261f2/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 14:41:22 crc kubenswrapper[4768]: I0217 14:41:22.419494 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-4tgnr_7df23c60-d5f8-47e9-a852-ba39850823cb/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:22 crc kubenswrapper[4768]: I0217 14:41:22.641694 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4a274ef1-85cc-4456-960d-079fe7c8ea6d/nova-metadata-log/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.247800 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_a0368ca4-d5b7-4604-b15a-a7cb4fcf5652/mysql-bootstrap/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.448604 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1b3ad6f8-7496-467a-bdeb-7cf29963af21/nova-scheduler-scheduler/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.539736 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0368ca4-d5b7-4604-b15a-a7cb4fcf5652/mysql-bootstrap/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.554412 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a0368ca4-d5b7-4604-b15a-a7cb4fcf5652/galera/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.765644 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5ba1ccc6-d556-4867-8e12-a5747dba1ffa/mysql-bootstrap/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.823186 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4a274ef1-85cc-4456-960d-079fe7c8ea6d/nova-metadata-metadata/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.942158 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5ba1ccc6-d556-4867-8e12-a5747dba1ffa/mysql-bootstrap/0.log" Feb 17 14:41:23 crc kubenswrapper[4768]: I0217 14:41:23.949498 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5ba1ccc6-d556-4867-8e12-a5747dba1ffa/galera/0.log" Feb 17 14:41:24 crc kubenswrapper[4768]: I0217 14:41:24.360273 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b765d360-2c6c-4740-b75e-bd16636a41e0/openstackclient/0.log" Feb 17 14:41:24 crc kubenswrapper[4768]: I0217 14:41:24.659719 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-gnb4g_39dede0b-4ddc-46ea-81c1-a8e7e576aa78/ovn-controller/0.log" Feb 17 14:41:24 crc kubenswrapper[4768]: I0217 14:41:24.782408 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rchd4_d969a380-827a-46eb-8f6e-9f28ae50312a/openstack-network-exporter/0.log" Feb 17 14:41:24 crc kubenswrapper[4768]: I0217 14:41:24.955259 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovsdb-server-init/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.265119 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovs-vswitchd/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.304071 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovsdb-server/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.406131 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rkhhj_75bf7b04-fd76-440d-b975-abf1c4972c4f/ovsdb-server-init/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.550346 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9nqqc_20f7a484-7e3c-4df5-84b0-98bd83632fb1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.581187 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_41ee36c1-d509-4c0c-960a-279955237a10/openstack-network-exporter/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.677321 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_41ee36c1-d509-4c0c-960a-279955237a10/ovn-northd/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.791056 4768 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1296e827-af28-4d2e-a80d-33add3697b6e/openstack-network-exporter/0.log" Feb 17 14:41:25 crc kubenswrapper[4768]: I0217 14:41:25.830700 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1296e827-af28-4d2e-a80d-33add3697b6e/ovsdbserver-nb/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.011717 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e5947dc-7f07-4498-8be8-2b0c184c5853/openstack-network-exporter/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.038057 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e5947dc-7f07-4498-8be8-2b0c184c5853/ovsdbserver-sb/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.214292 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f459487b8-6m6q4_41aee306-e130-4ed4-ba8e-381531d03dc3/placement-api/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.323024 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_edccbc8c-a38a-4c5d-b31a-a3b55f182ffa/setup-container/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.331083 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f459487b8-6m6q4_41aee306-e130-4ed4-ba8e-381531d03dc3/placement-log/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.478063 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_edccbc8c-a38a-4c5d-b31a-a3b55f182ffa/setup-container/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.620020 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78830acd-378f-4199-8615-9884cdca4154/setup-container/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.620288 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_edccbc8c-a38a-4c5d-b31a-a3b55f182ffa/rabbitmq/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.744600 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78830acd-378f-4199-8615-9884cdca4154/setup-container/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.792373 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78830acd-378f-4199-8615-9884cdca4154/rabbitmq/0.log" Feb 17 14:41:26 crc kubenswrapper[4768]: I0217 14:41:26.820719 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-kzxnk_42b3a8d2-3952-474e-9821-8472466012cb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.023344 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ttj4s_c99d698a-1af3-46d2-97c5-0c33573adaca/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.042680 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-wsvlq_239d0b98-514d-42e7-8a8c-ac152e3410ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.216677 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-td9b9_c3e60ab5-9a2d-4dc6-8ce8-d730becb94ff/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.298825 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4nwsr_62a034b9-286c-4b4b-aea8-8ca20fe7610f/ssh-known-hosts-edpm-deployment/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.553981 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-proxy-6999b7cf5c-4f5kt_4ac4ebb9-cc51-4934-b4c7-590830f2a04a/proxy-server/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.596837 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6999b7cf5c-4f5kt_4ac4ebb9-cc51-4934-b4c7-590830f2a04a/proxy-httpd/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.659331 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wcvmp_3fc3a6f3-433a-44de-bf42-c29e730f2da3/swift-ring-rebalance/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.787698 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-auditor/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.806923 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-reaper/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.947915 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-replicator/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.961006 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/account-server/0.log" Feb 17 14:41:27 crc kubenswrapper[4768]: I0217 14:41:27.985783 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-auditor/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.065395 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-replicator/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.148820 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-server/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.148892 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/container-updater/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.189336 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-auditor/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.261319 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-expirer/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.398783 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-server/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.420562 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-replicator/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.439273 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/object-updater/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.502499 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/rsync/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.602090 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d96922-74f7-4840-bcad-6f98ffb1bbdf/swift-recon-cron/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.712450 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-txckp_037854ba-d107-4be1-8a90-914e9180957d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.849412 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_780f2ee6-f4d9-455c-97e6-7e6451706324/tempest-tests-tempest-tests-runner/0.log" Feb 17 14:41:28 crc kubenswrapper[4768]: I0217 14:41:28.964321 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0ecebf14-59d4-448f-8f09-f3b51ebd695e/test-operator-logs-container/0.log" Feb 17 14:41:29 crc kubenswrapper[4768]: I0217 14:41:29.082094 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tj4bn_72dee802-02e1-4ce6-adf4-a32b56d357b4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 14:41:37 crc kubenswrapper[4768]: I0217 14:41:37.526866 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d87a0ca2-9789-4e14-a18b-2ed216ea5d15/memcached/0.log" Feb 17 14:41:54 crc kubenswrapper[4768]: I0217 14:41:54.050664 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/util/0.log" Feb 17 14:41:54 crc kubenswrapper[4768]: I0217 14:41:54.448998 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/util/0.log" Feb 17 14:41:54 crc kubenswrapper[4768]: I0217 14:41:54.467486 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/pull/0.log" Feb 17 14:41:54 crc kubenswrapper[4768]: I0217 
14:41:54.484514 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/pull/0.log" Feb 17 14:41:54 crc kubenswrapper[4768]: I0217 14:41:54.680141 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/util/0.log" Feb 17 14:41:54 crc kubenswrapper[4768]: I0217 14:41:54.713186 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/pull/0.log" Feb 17 14:41:54 crc kubenswrapper[4768]: I0217 14:41:54.727687 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608n5lkj_536648e3-7aff-4027-8132-3aed7835b43f/extract/0.log" Feb 17 14:41:55 crc kubenswrapper[4768]: I0217 14:41:55.140842 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-hn2hg_663c818c-0255-4f9c-827e-ccb2b430c5e3/manager/0.log" Feb 17 14:41:55 crc kubenswrapper[4768]: I0217 14:41:55.512832 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-7wnck_f5689d3e-d755-485e-80a1-e808c460022d/manager/0.log" Feb 17 14:41:55 crc kubenswrapper[4768]: I0217 14:41:55.703602 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-bl6rp_8b8ebdec-5fc0-4f66-9a22-b833d3cd4283/manager/0.log" Feb 17 14:41:56 crc kubenswrapper[4768]: I0217 14:41:56.396213 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-hrb5z_aa6bb524-9950-4add-9b03-04f324c9a02d/manager/0.log" Feb 17 14:41:56 crc kubenswrapper[4768]: I0217 14:41:56.950017 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-pfr2g_2560f30e-2ede-4f2e-a3a1-e3e7e96b5792/manager/0.log" Feb 17 14:41:57 crc kubenswrapper[4768]: I0217 14:41:57.078167 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cpkx6_9e699840-e748-4e5d-8629-f0379a7cce08/manager/0.log" Feb 17 14:41:57 crc kubenswrapper[4768]: I0217 14:41:57.273580 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-hrkzn_633a0666-42b2-4422-9b47-fb69c1105655/manager/0.log" Feb 17 14:41:57 crc kubenswrapper[4768]: I0217 14:41:57.357353 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-7ntmb_4f912c2e-e494-46c0-9231-40c106b00c40/manager/0.log" Feb 17 14:41:57 crc kubenswrapper[4768]: I0217 14:41:57.510819 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-qfz4j_c040f799-8668-44a6-b694-0b253aaf7930/manager/0.log" Feb 17 14:41:57 crc kubenswrapper[4768]: I0217 14:41:57.846398 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-v5svk_93f56e48-e402-471a-b9c0-0fac088f7a7e/manager/0.log" Feb 17 14:41:58 crc kubenswrapper[4768]: I0217 14:41:58.343986 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-4wm78_2e1d5b4f-7ff1-43bf-99ed-48cb79cb86df/manager/0.log" Feb 17 14:41:58 crc kubenswrapper[4768]: I0217 14:41:58.360613 4768 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-9krmz_e48b6c11-496b-4f36-9155-119bbfb506f8/manager/0.log" Feb 17 14:41:58 crc kubenswrapper[4768]: I0217 14:41:58.575022 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cg22r7_09c0d3ef-49e2-4dec-a95f-951be73d5740/manager/0.log" Feb 17 14:41:59 crc kubenswrapper[4768]: I0217 14:41:59.003804 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5b99dcf57b-tb622_3f9e32d0-4476-4d44-8266-d821ad79f322/operator/0.log" Feb 17 14:41:59 crc kubenswrapper[4768]: I0217 14:41:59.198743 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-46vqg_ee54ff6a-14d8-4701-beac-8f6eeafc5d84/registry-server/0.log" Feb 17 14:41:59 crc kubenswrapper[4768]: I0217 14:41:59.467368 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-dp9tq_96ea84f4-0ad3-49bc-9eb5-83bfbbb7ee0a/manager/0.log" Feb 17 14:41:59 crc kubenswrapper[4768]: I0217 14:41:59.709718 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-pvjkl_9aee7c4a-404a-434e-8aa9-b671553532d2/manager/0.log" Feb 17 14:41:59 crc kubenswrapper[4768]: I0217 14:41:59.941931 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pqvzt_a5348349-c195-4af1-b367-a6cb0842305b/operator/0.log" Feb 17 14:42:00 crc kubenswrapper[4768]: I0217 14:42:00.162344 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-kmgl4_a9e933fb-130b-4a7e-91c4-9ca5f2747e35/manager/0.log" Feb 17 14:42:00 crc kubenswrapper[4768]: I0217 14:42:00.485097 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-4c9sb_d8d8a911-905e-45e3-a4ed-35338f74806f/manager/0.log" Feb 17 14:42:00 crc kubenswrapper[4768]: I0217 14:42:00.655984 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-ln8v5_c8a74650-d867-4ab8-92a3-fcdc815247c4/manager/0.log" Feb 17 14:42:00 crc kubenswrapper[4768]: I0217 14:42:00.849301 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-xx7vm_71305e38-208f-43be-9bb9-32341555750c/manager/0.log" Feb 17 14:42:00 crc kubenswrapper[4768]: I0217 14:42:00.986974 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57785b79bf-sjndd_e7b1071b-c742-4578-8226-12a6cce613f1/manager/0.log" Feb 17 14:42:01 crc kubenswrapper[4768]: I0217 14:42:01.216741 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-j4n52_2567e2d5-83bd-4345-b94b-36527465ce1b/manager/0.log" Feb 17 14:42:05 crc kubenswrapper[4768]: I0217 14:42:05.451715 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-99tll_ea7f039d-d594-4b9e-9dac-06e9f13bdba2/manager/0.log" Feb 17 14:42:23 crc kubenswrapper[4768]: I0217 14:42:23.155271 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-tpnsx_6a5895c9-f283-43f0-82d7-c8a0cbf377ce/control-plane-machine-set-operator/0.log" Feb 17 14:42:23 crc kubenswrapper[4768]: I0217 14:42:23.345407 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2b5c_38dc9a37-3332-40e5-b4cd-3c702455584d/kube-rbac-proxy/0.log" Feb 17 14:42:23 crc kubenswrapper[4768]: I0217 
14:42:23.378247 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2b5c_38dc9a37-3332-40e5-b4cd-3c702455584d/machine-api-operator/0.log" Feb 17 14:42:36 crc kubenswrapper[4768]: I0217 14:42:36.857189 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ktxxp_390428c9-7c97-428f-b609-39f72ff5e558/cert-manager-controller/0.log" Feb 17 14:42:37 crc kubenswrapper[4768]: I0217 14:42:37.025627 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-g4sjn_ddf2aeae-0541-4180-883f-a7bdfeb65a57/cert-manager-cainjector/0.log" Feb 17 14:42:37 crc kubenswrapper[4768]: I0217 14:42:37.085274 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8nwpk_c39c013f-68bd-4b7b-9582-2cecc55854a5/cert-manager-webhook/0.log" Feb 17 14:42:51 crc kubenswrapper[4768]: I0217 14:42:51.384687 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-2cs2r_b7911594-28b5-4c40-b08b-5f3b33d9bd11/nmstate-console-plugin/0.log" Feb 17 14:42:51 crc kubenswrapper[4768]: I0217 14:42:51.499207 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dfmfq_aef4be82-c769-456d-90be-c95789ab9c2c/nmstate-handler/0.log" Feb 17 14:42:51 crc kubenswrapper[4768]: I0217 14:42:51.554922 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-f5lj2_77308fca-ea01-49d6-b264-61df88438fd0/kube-rbac-proxy/0.log" Feb 17 14:42:51 crc kubenswrapper[4768]: I0217 14:42:51.591522 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-f5lj2_77308fca-ea01-49d6-b264-61df88438fd0/nmstate-metrics/0.log" Feb 17 14:42:51 crc kubenswrapper[4768]: I0217 14:42:51.741682 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2fndh_89be9918-4f1d-4c85-8d1c-73f9245fd232/nmstate-operator/0.log" Feb 17 14:42:51 crc kubenswrapper[4768]: I0217 14:42:51.802549 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-ghl29_f4ddc594-4af8-4856-9542-0a76bf8c5acc/nmstate-webhook/0.log" Feb 17 14:42:58 crc kubenswrapper[4768]: I0217 14:42:58.060090 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:42:58 crc kubenswrapper[4768]: I0217 14:42:58.060568 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.579281 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lth9l"] Feb 17 14:43:03 crc kubenswrapper[4768]: E0217 14:43:03.580931 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a" containerName="container-00" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.580951 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a" containerName="container-00" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.581300 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fdb26b-d4d9-4dd0-a2cd-5a65b868419a" containerName="container-00" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.587345 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.720828 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-catalog-content\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.720946 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72bgb\" (UniqueName: \"kubernetes.io/projected/e5066c80-5037-42e0-b1ad-62abc50a690b-kube-api-access-72bgb\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.721008 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-utilities\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.762401 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lth9l"] Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.822601 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72bgb\" (UniqueName: \"kubernetes.io/projected/e5066c80-5037-42e0-b1ad-62abc50a690b-kube-api-access-72bgb\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.822722 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-utilities\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.823182 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-catalog-content\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.824600 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-utilities\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.825452 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-catalog-content\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.845628 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72bgb\" (UniqueName: \"kubernetes.io/projected/e5066c80-5037-42e0-b1ad-62abc50a690b-kube-api-access-72bgb\") pod \"redhat-operators-lth9l\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:03 crc kubenswrapper[4768]: I0217 14:43:03.916703 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:04 crc kubenswrapper[4768]: I0217 14:43:04.398144 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lth9l"] Feb 17 14:43:04 crc kubenswrapper[4768]: I0217 14:43:04.811476 4768 generic.go:334] "Generic (PLEG): container finished" podID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerID="86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521" exitCode=0 Feb 17 14:43:04 crc kubenswrapper[4768]: I0217 14:43:04.811561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lth9l" event={"ID":"e5066c80-5037-42e0-b1ad-62abc50a690b","Type":"ContainerDied","Data":"86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521"} Feb 17 14:43:04 crc kubenswrapper[4768]: I0217 14:43:04.812701 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lth9l" event={"ID":"e5066c80-5037-42e0-b1ad-62abc50a690b","Type":"ContainerStarted","Data":"7b416b127a0fb3130a5c24a05ce76b132a8b9d86f8a2bf123ae9ec035eb9c170"} Feb 17 14:43:05 crc kubenswrapper[4768]: I0217 14:43:05.824139 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lth9l" event={"ID":"e5066c80-5037-42e0-b1ad-62abc50a690b","Type":"ContainerStarted","Data":"ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4"} Feb 17 14:43:07 crc kubenswrapper[4768]: I0217 14:43:07.848856 4768 generic.go:334] "Generic (PLEG): container finished" podID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerID="ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4" exitCode=0 Feb 17 14:43:07 crc kubenswrapper[4768]: I0217 14:43:07.848906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lth9l" 
event={"ID":"e5066c80-5037-42e0-b1ad-62abc50a690b","Type":"ContainerDied","Data":"ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4"} Feb 17 14:43:09 crc kubenswrapper[4768]: I0217 14:43:09.868902 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lth9l" event={"ID":"e5066c80-5037-42e0-b1ad-62abc50a690b","Type":"ContainerStarted","Data":"e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871"} Feb 17 14:43:09 crc kubenswrapper[4768]: I0217 14:43:09.896526 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lth9l" podStartSLOduration=2.808687256 podStartE2EDuration="6.896485989s" podCreationTimestamp="2026-02-17 14:43:03 +0000 UTC" firstStartedPulling="2026-02-17 14:43:04.812889214 +0000 UTC m=+4004.092275666" lastFinishedPulling="2026-02-17 14:43:08.900687957 +0000 UTC m=+4008.180074399" observedRunningTime="2026-02-17 14:43:09.88622833 +0000 UTC m=+4009.165614792" watchObservedRunningTime="2026-02-17 14:43:09.896485989 +0000 UTC m=+4009.175872441" Feb 17 14:43:13 crc kubenswrapper[4768]: I0217 14:43:13.917046 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:13 crc kubenswrapper[4768]: I0217 14:43:13.918436 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:14 crc kubenswrapper[4768]: I0217 14:43:14.963674 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lth9l" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="registry-server" probeResult="failure" output=< Feb 17 14:43:14 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 14:43:14 crc kubenswrapper[4768]: > Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.255866 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-nv4f7_1a756f9a-bd11-42b8-9b67-1585ee9a5322/kube-rbac-proxy/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.382283 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-nv4f7_1a756f9a-bd11-42b8-9b67-1585ee9a5322/controller/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.522683 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.707420 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.726162 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.759276 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.761812 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.894868 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.924195 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.927332 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:43:20 crc kubenswrapper[4768]: I0217 14:43:20.936945 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.143808 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-reloader/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.169948 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-frr-files/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.175274 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/cp-metrics/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.221399 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/controller/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.372609 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/frr-metrics/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.378504 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/kube-rbac-proxy/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.438448 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/kube-rbac-proxy-frr/0.log" Feb 17 14:43:21 crc kubenswrapper[4768]: I0217 14:43:21.559668 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/reloader/0.log" Feb 17 14:43:22 crc kubenswrapper[4768]: I0217 14:43:22.153591 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-qq5lm_8d47747b-4164-4e0e-b424-513d688cf6a8/frr-k8s-webhook-server/0.log" Feb 17 14:43:22 crc kubenswrapper[4768]: I0217 14:43:22.173172 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-df9f8fb7d-rjc2w_75aabc4c-e213-4a4b-a0ec-b0907ae8fd0e/manager/0.log" Feb 17 14:43:22 crc kubenswrapper[4768]: I0217 14:43:22.373632 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-654b8769b8-plb5c_fac3924e-d369-478e-9c10-c0a381b8696c/webhook-server/0.log" Feb 17 14:43:22 crc kubenswrapper[4768]: I0217 14:43:22.559056 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hr4bc_9267631c-d9e1-49dc-a9bc-40f8ef1182ca/frr/0.log" Feb 17 14:43:22 crc kubenswrapper[4768]: I0217 14:43:22.575169 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8trsw_90e8e26b-3dc0-4bf7-a493-8c089ace61a0/kube-rbac-proxy/0.log" Feb 17 14:43:22 crc kubenswrapper[4768]: I0217 14:43:22.944341 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8trsw_90e8e26b-3dc0-4bf7-a493-8c089ace61a0/speaker/0.log" Feb 17 14:43:24 crc kubenswrapper[4768]: I0217 14:43:24.977765 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lth9l" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="registry-server" probeResult="failure" output=< Feb 17 14:43:24 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 17 14:43:24 crc kubenswrapper[4768]: > Feb 17 14:43:28 crc kubenswrapper[4768]: I0217 14:43:28.060067 4768 patch_prober.go:28] interesting 
pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:43:28 crc kubenswrapper[4768]: I0217 14:43:28.060646 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.056121 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-94m74"] Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.058945 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.071550 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94m74"] Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.208391 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-catalog-content\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.208467 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-utilities\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 
17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.208633 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkgr\" (UniqueName: \"kubernetes.io/projected/e272807f-57e7-4fef-a5ed-9d511e93576e-kube-api-access-qdkgr\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.309871 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-utilities\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.310013 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdkgr\" (UniqueName: \"kubernetes.io/projected/e272807f-57e7-4fef-a5ed-9d511e93576e-kube-api-access-qdkgr\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.310051 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-catalog-content\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.310599 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-catalog-content\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 
17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.310826 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-utilities\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.331443 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdkgr\" (UniqueName: \"kubernetes.io/projected/e272807f-57e7-4fef-a5ed-9d511e93576e-kube-api-access-qdkgr\") pod \"redhat-marketplace-94m74\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.378932 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:33 crc kubenswrapper[4768]: I0217 14:43:33.878446 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94m74"] Feb 17 14:43:34 crc kubenswrapper[4768]: I0217 14:43:34.101557 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:34 crc kubenswrapper[4768]: I0217 14:43:34.118498 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94m74" event={"ID":"e272807f-57e7-4fef-a5ed-9d511e93576e","Type":"ContainerStarted","Data":"a30581e824c6344d90c5ae4bb82d1976d48b5214dfc32f33c718f439edf312a4"} Feb 17 14:43:34 crc kubenswrapper[4768]: I0217 14:43:34.163763 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:35 crc kubenswrapper[4768]: I0217 14:43:35.127923 4768 generic.go:334] "Generic (PLEG): container finished" podID="e272807f-57e7-4fef-a5ed-9d511e93576e" 
containerID="f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5" exitCode=0 Feb 17 14:43:35 crc kubenswrapper[4768]: I0217 14:43:35.127986 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94m74" event={"ID":"e272807f-57e7-4fef-a5ed-9d511e93576e","Type":"ContainerDied","Data":"f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5"} Feb 17 14:43:35 crc kubenswrapper[4768]: I0217 14:43:35.833724 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lth9l"] Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.136190 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lth9l" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="registry-server" containerID="cri-o://e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871" gracePeriod=2 Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.626870 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.755800 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/util/0.log" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.788947 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-utilities\") pod \"e5066c80-5037-42e0-b1ad-62abc50a690b\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.789163 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72bgb\" (UniqueName: \"kubernetes.io/projected/e5066c80-5037-42e0-b1ad-62abc50a690b-kube-api-access-72bgb\") pod \"e5066c80-5037-42e0-b1ad-62abc50a690b\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.789267 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-catalog-content\") pod \"e5066c80-5037-42e0-b1ad-62abc50a690b\" (UID: \"e5066c80-5037-42e0-b1ad-62abc50a690b\") " Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.791285 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-utilities" (OuterVolumeSpecName: "utilities") pod "e5066c80-5037-42e0-b1ad-62abc50a690b" (UID: "e5066c80-5037-42e0-b1ad-62abc50a690b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.799437 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5066c80-5037-42e0-b1ad-62abc50a690b-kube-api-access-72bgb" (OuterVolumeSpecName: "kube-api-access-72bgb") pod "e5066c80-5037-42e0-b1ad-62abc50a690b" (UID: "e5066c80-5037-42e0-b1ad-62abc50a690b"). InnerVolumeSpecName "kube-api-access-72bgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.891583 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.891618 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72bgb\" (UniqueName: \"kubernetes.io/projected/e5066c80-5037-42e0-b1ad-62abc50a690b-kube-api-access-72bgb\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.925389 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5066c80-5037-42e0-b1ad-62abc50a690b" (UID: "e5066c80-5037-42e0-b1ad-62abc50a690b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.991406 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/util/0.log" Feb 17 14:43:36 crc kubenswrapper[4768]: I0217 14:43:36.992574 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5066c80-5037-42e0-b1ad-62abc50a690b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.024745 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/pull/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.033922 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/pull/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.145331 4768 generic.go:334] "Generic (PLEG): container finished" podID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerID="d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d" exitCode=0 Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.145413 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94m74" event={"ID":"e272807f-57e7-4fef-a5ed-9d511e93576e","Type":"ContainerDied","Data":"d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d"} Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.147716 4768 generic.go:334] "Generic (PLEG): container finished" podID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerID="e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871" exitCode=0 Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 
14:43:37.147743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lth9l" event={"ID":"e5066c80-5037-42e0-b1ad-62abc50a690b","Type":"ContainerDied","Data":"e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871"} Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.147774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lth9l" event={"ID":"e5066c80-5037-42e0-b1ad-62abc50a690b","Type":"ContainerDied","Data":"7b416b127a0fb3130a5c24a05ce76b132a8b9d86f8a2bf123ae9ec035eb9c170"} Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.147790 4768 scope.go:117] "RemoveContainer" containerID="e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.147788 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lth9l" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.165604 4768 scope.go:117] "RemoveContainer" containerID="ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.202698 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lth9l"] Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.204688 4768 scope.go:117] "RemoveContainer" containerID="86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.210007 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lth9l"] Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.241563 4768 scope.go:117] "RemoveContainer" containerID="e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871" Feb 17 14:43:37 crc kubenswrapper[4768]: E0217 14:43:37.242227 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871\": container with ID starting with e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871 not found: ID does not exist" containerID="e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.242298 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871"} err="failed to get container status \"e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871\": rpc error: code = NotFound desc = could not find container \"e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871\": container with ID starting with e28bb66a6436171ccf9ba0ba3bcbbc811e0218443944fd0d40b1fac0a295c871 not found: ID does not exist" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.242338 4768 scope.go:117] "RemoveContainer" containerID="ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4" Feb 17 14:43:37 crc kubenswrapper[4768]: E0217 14:43:37.242869 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4\": container with ID starting with ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4 not found: ID does not exist" containerID="ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.242937 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4"} err="failed to get container status \"ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4\": rpc error: code = NotFound desc = could not find container 
\"ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4\": container with ID starting with ceb96358ba93797fee13d03276c80b8df03f0ebdde42e464352b08130d2c9aa4 not found: ID does not exist" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.242972 4768 scope.go:117] "RemoveContainer" containerID="86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521" Feb 17 14:43:37 crc kubenswrapper[4768]: E0217 14:43:37.244809 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521\": container with ID starting with 86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521 not found: ID does not exist" containerID="86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.244848 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521"} err="failed to get container status \"86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521\": rpc error: code = NotFound desc = could not find container \"86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521\": container with ID starting with 86c24731eb162b02d644b3f3aed28520bac9b92b7b0c96128909832a4a13a521 not found: ID does not exist" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.265461 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/extract/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.286609 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/pull/0.log" Feb 17 14:43:37 crc 
kubenswrapper[4768]: I0217 14:43:37.295187 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137gw9s_1b729781-75da-4a19-afbf-7a9459f6a7da/util/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.428920 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-utilities/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.546848 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" path="/var/lib/kubelet/pods/e5066c80-5037-42e0-b1ad-62abc50a690b/volumes" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.588937 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-utilities/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.607805 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-content/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.650615 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-content/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.827349 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-utilities/0.log" Feb 17 14:43:37 crc kubenswrapper[4768]: I0217 14:43:37.867657 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/extract-content/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.062452 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-utilities/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.160129 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94m74" event={"ID":"e272807f-57e7-4fef-a5ed-9d511e93576e","Type":"ContainerStarted","Data":"a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1"} Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.179449 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-94m74" podStartSLOduration=2.772527522 podStartE2EDuration="5.179421394s" podCreationTimestamp="2026-02-17 14:43:33 +0000 UTC" firstStartedPulling="2026-02-17 14:43:35.130265384 +0000 UTC m=+4034.409651816" lastFinishedPulling="2026-02-17 14:43:37.537159246 +0000 UTC m=+4036.816545688" observedRunningTime="2026-02-17 14:43:38.17449663 +0000 UTC m=+4037.453883072" watchObservedRunningTime="2026-02-17 14:43:38.179421394 +0000 UTC m=+4037.458807846" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.327382 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-content/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.327969 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-content/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.401692 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-utilities/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.507957 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-hl2m8_8a1604e7-c6f5-498e-ac94-a9e888e3e6b3/registry-server/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.616954 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-utilities/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.685438 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/extract-content/0.log" Feb 17 14:43:38 crc kubenswrapper[4768]: I0217 14:43:38.964987 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/util/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.242094 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsd2_3e01cac9-3463-4a68-be1d-e64867827ad3/registry-server/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.297880 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/util/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.343705 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/pull/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.364886 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/pull/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.544657 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/extract/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.581120 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/util/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.601292 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecant4wt_4d7e7247-8115-4259-b218-d5d8dceac01d/pull/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.839758 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-utilities/0.log" Feb 17 14:43:39 crc kubenswrapper[4768]: I0217 14:43:39.841936 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-grp2v_949f4cbc-e86f-4f30-bac7-d31c24169e4e/marketplace-operator/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.154751 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-utilities/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.156244 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-content/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.168206 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-content/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.346620 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-content/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.400560 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/extract-utilities/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.543026 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-288nc_317cb29b-f26f-4bed-a923-9fe5e7d15391/registry-server/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.569315 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-94m74_e272807f-57e7-4fef-a5ed-9d511e93576e/extract-utilities/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.790088 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-94m74_e272807f-57e7-4fef-a5ed-9d511e93576e/extract-utilities/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.795005 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-94m74_e272807f-57e7-4fef-a5ed-9d511e93576e/extract-content/0.log" Feb 17 14:43:40 crc kubenswrapper[4768]: I0217 14:43:40.818379 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-94m74_e272807f-57e7-4fef-a5ed-9d511e93576e/extract-content/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.255018 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-94m74_e272807f-57e7-4fef-a5ed-9d511e93576e/extract-content/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.255633 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-94m74_e272807f-57e7-4fef-a5ed-9d511e93576e/extract-utilities/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.343083 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-utilities/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.350841 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-94m74_e272807f-57e7-4fef-a5ed-9d511e93576e/registry-server/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.489436 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-content/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.494750 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-utilities/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.528755 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-content/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.693169 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-content/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.694568 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/extract-utilities/0.log" Feb 17 14:43:41 crc kubenswrapper[4768]: I0217 14:43:41.903241 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9l98f_c2e521d5-38fc-41dc-9d90-cd52ebc76308/registry-server/0.log" Feb 17 
14:43:43 crc kubenswrapper[4768]: I0217 14:43:43.379914 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:43 crc kubenswrapper[4768]: I0217 14:43:43.381217 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:43 crc kubenswrapper[4768]: I0217 14:43:43.437831 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:44 crc kubenswrapper[4768]: I0217 14:43:44.289219 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:44 crc kubenswrapper[4768]: I0217 14:43:44.343039 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94m74"] Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.247659 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-94m74" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="registry-server" containerID="cri-o://a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1" gracePeriod=2 Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.720131 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.795820 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-catalog-content\") pod \"e272807f-57e7-4fef-a5ed-9d511e93576e\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.795885 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdkgr\" (UniqueName: \"kubernetes.io/projected/e272807f-57e7-4fef-a5ed-9d511e93576e-kube-api-access-qdkgr\") pod \"e272807f-57e7-4fef-a5ed-9d511e93576e\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.795908 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-utilities\") pod \"e272807f-57e7-4fef-a5ed-9d511e93576e\" (UID: \"e272807f-57e7-4fef-a5ed-9d511e93576e\") " Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.797281 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-utilities" (OuterVolumeSpecName: "utilities") pod "e272807f-57e7-4fef-a5ed-9d511e93576e" (UID: "e272807f-57e7-4fef-a5ed-9d511e93576e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.805802 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e272807f-57e7-4fef-a5ed-9d511e93576e-kube-api-access-qdkgr" (OuterVolumeSpecName: "kube-api-access-qdkgr") pod "e272807f-57e7-4fef-a5ed-9d511e93576e" (UID: "e272807f-57e7-4fef-a5ed-9d511e93576e"). InnerVolumeSpecName "kube-api-access-qdkgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.843435 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e272807f-57e7-4fef-a5ed-9d511e93576e" (UID: "e272807f-57e7-4fef-a5ed-9d511e93576e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.897870 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.897906 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdkgr\" (UniqueName: \"kubernetes.io/projected/e272807f-57e7-4fef-a5ed-9d511e93576e-kube-api-access-qdkgr\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:46 crc kubenswrapper[4768]: I0217 14:43:46.897918 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e272807f-57e7-4fef-a5ed-9d511e93576e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.264499 4768 generic.go:334] "Generic (PLEG): container finished" podID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerID="a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1" exitCode=0 Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.264579 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94m74" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.264571 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94m74" event={"ID":"e272807f-57e7-4fef-a5ed-9d511e93576e","Type":"ContainerDied","Data":"a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1"} Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.265140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94m74" event={"ID":"e272807f-57e7-4fef-a5ed-9d511e93576e","Type":"ContainerDied","Data":"a30581e824c6344d90c5ae4bb82d1976d48b5214dfc32f33c718f439edf312a4"} Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.265164 4768 scope.go:117] "RemoveContainer" containerID="a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.294567 4768 scope.go:117] "RemoveContainer" containerID="d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.303419 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94m74"] Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.324710 4768 scope.go:117] "RemoveContainer" containerID="f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.357668 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-94m74"] Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.370879 4768 scope.go:117] "RemoveContainer" containerID="a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1" Feb 17 14:43:47 crc kubenswrapper[4768]: E0217 14:43:47.371504 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1\": container with ID starting with a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1 not found: ID does not exist" containerID="a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.371586 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1"} err="failed to get container status \"a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1\": rpc error: code = NotFound desc = could not find container \"a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1\": container with ID starting with a7c84152e2ed729e378fad6184d7e5bc89646ce1b77ade2822b951f03fc806d1 not found: ID does not exist" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.371623 4768 scope.go:117] "RemoveContainer" containerID="d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d" Feb 17 14:43:47 crc kubenswrapper[4768]: E0217 14:43:47.372023 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d\": container with ID starting with d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d not found: ID does not exist" containerID="d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.372062 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d"} err="failed to get container status \"d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d\": rpc error: code = NotFound desc = could not find container \"d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d\": container with ID 
starting with d559513dcbf99f240fc0d9c9076cabc38b15a227de0344186461e868e658837d not found: ID does not exist" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.372084 4768 scope.go:117] "RemoveContainer" containerID="f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5" Feb 17 14:43:47 crc kubenswrapper[4768]: E0217 14:43:47.372357 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5\": container with ID starting with f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5 not found: ID does not exist" containerID="f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.372388 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5"} err="failed to get container status \"f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5\": rpc error: code = NotFound desc = could not find container \"f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5\": container with ID starting with f0511869240f27ac29e8f555252a0c72c38f0be0d5a704f6671941f3338984a5 not found: ID does not exist" Feb 17 14:43:47 crc kubenswrapper[4768]: I0217 14:43:47.545693 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" path="/var/lib/kubelet/pods/e272807f-57e7-4fef-a5ed-9d511e93576e/volumes" Feb 17 14:43:58 crc kubenswrapper[4768]: I0217 14:43:58.059524 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 14:43:58 crc kubenswrapper[4768]: I0217 
14:43:58.060085 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 14:43:58 crc kubenswrapper[4768]: I0217 14:43:58.060179 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" Feb 17 14:43:58 crc kubenswrapper[4768]: I0217 14:43:58.061143 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19f98ba1d898b39bf0a21ff629e9d39b7fa1b2d21cfcd6ddb2bf6f26918bf017"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 14:43:58 crc kubenswrapper[4768]: I0217 14:43:58.061227 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://19f98ba1d898b39bf0a21ff629e9d39b7fa1b2d21cfcd6ddb2bf6f26918bf017" gracePeriod=600 Feb 17 14:43:58 crc kubenswrapper[4768]: I0217 14:43:58.376859 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="19f98ba1d898b39bf0a21ff629e9d39b7fa1b2d21cfcd6ddb2bf6f26918bf017" exitCode=0 Feb 17 14:43:58 crc kubenswrapper[4768]: I0217 14:43:58.376933 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"19f98ba1d898b39bf0a21ff629e9d39b7fa1b2d21cfcd6ddb2bf6f26918bf017"} Feb 17 14:43:58 crc 
kubenswrapper[4768]: I0217 14:43:58.377154 4768 scope.go:117] "RemoveContainer" containerID="1cdfe752ad5564d137aa582fce5dfd1f6e3403007387ccc408cda17cd0114903" Feb 17 14:43:59 crc kubenswrapper[4768]: I0217 14:43:59.395270 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerStarted","Data":"7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"} Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.468401 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9sj4x"] Feb 17 14:44:32 crc kubenswrapper[4768]: E0217 14:44:32.469467 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="extract-content" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469487 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="extract-content" Feb 17 14:44:32 crc kubenswrapper[4768]: E0217 14:44:32.469508 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="extract-content" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469515 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="extract-content" Feb 17 14:44:32 crc kubenswrapper[4768]: E0217 14:44:32.469537 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="extract-utilities" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469546 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="extract-utilities" Feb 17 14:44:32 crc kubenswrapper[4768]: E0217 14:44:32.469559 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="extract-utilities" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469567 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="extract-utilities" Feb 17 14:44:32 crc kubenswrapper[4768]: E0217 14:44:32.469584 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="registry-server" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469593 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="registry-server" Feb 17 14:44:32 crc kubenswrapper[4768]: E0217 14:44:32.469604 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="registry-server" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469611 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="registry-server" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469890 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5066c80-5037-42e0-b1ad-62abc50a690b" containerName="registry-server" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.469910 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e272807f-57e7-4fef-a5ed-9d511e93576e" containerName="registry-server" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.471551 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.493275 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9sj4x"] Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.551762 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgdq\" (UniqueName: \"kubernetes.io/projected/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-kube-api-access-wxgdq\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.551983 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-utilities\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.552208 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-catalog-content\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.654755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-utilities\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.655126 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-utilities\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.655247 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-catalog-content\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.655498 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-catalog-content\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.655613 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgdq\" (UniqueName: \"kubernetes.io/projected/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-kube-api-access-wxgdq\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.677597 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgdq\" (UniqueName: \"kubernetes.io/projected/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-kube-api-access-wxgdq\") pod \"community-operators-9sj4x\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:32 crc kubenswrapper[4768]: I0217 14:44:32.841236 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:33 crc kubenswrapper[4768]: I0217 14:44:33.340303 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9sj4x"] Feb 17 14:44:33 crc kubenswrapper[4768]: I0217 14:44:33.699157 4768 generic.go:334] "Generic (PLEG): container finished" podID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerID="058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935" exitCode=0 Feb 17 14:44:33 crc kubenswrapper[4768]: I0217 14:44:33.699246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sj4x" event={"ID":"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b","Type":"ContainerDied","Data":"058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935"} Feb 17 14:44:33 crc kubenswrapper[4768]: I0217 14:44:33.699577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sj4x" event={"ID":"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b","Type":"ContainerStarted","Data":"81e718f57bca6ca4740bd9481c1f9567c38756e44a894c45895019246e20a5b4"} Feb 17 14:44:34 crc kubenswrapper[4768]: I0217 14:44:34.710998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sj4x" event={"ID":"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b","Type":"ContainerStarted","Data":"8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc"} Feb 17 14:44:35 crc kubenswrapper[4768]: I0217 14:44:35.727407 4768 generic.go:334] "Generic (PLEG): container finished" podID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerID="8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc" exitCode=0 Feb 17 14:44:35 crc kubenswrapper[4768]: I0217 14:44:35.727495 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sj4x" 
event={"ID":"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b","Type":"ContainerDied","Data":"8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc"} Feb 17 14:44:36 crc kubenswrapper[4768]: I0217 14:44:36.739735 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sj4x" event={"ID":"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b","Type":"ContainerStarted","Data":"b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a"} Feb 17 14:44:42 crc kubenswrapper[4768]: I0217 14:44:42.842091 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:42 crc kubenswrapper[4768]: I0217 14:44:42.842800 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:42 crc kubenswrapper[4768]: I0217 14:44:42.910388 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:42 crc kubenswrapper[4768]: I0217 14:44:42.937002 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9sj4x" podStartSLOduration=8.408798704 podStartE2EDuration="10.936982367s" podCreationTimestamp="2026-02-17 14:44:32 +0000 UTC" firstStartedPulling="2026-02-17 14:44:33.701381656 +0000 UTC m=+4092.980768098" lastFinishedPulling="2026-02-17 14:44:36.229565299 +0000 UTC m=+4095.508951761" observedRunningTime="2026-02-17 14:44:36.768680898 +0000 UTC m=+4096.048067340" watchObservedRunningTime="2026-02-17 14:44:42.936982367 +0000 UTC m=+4102.216368809" Feb 17 14:44:43 crc kubenswrapper[4768]: I0217 14:44:43.898528 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:43 crc kubenswrapper[4768]: I0217 14:44:43.947388 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-9sj4x"] Feb 17 14:44:45 crc kubenswrapper[4768]: I0217 14:44:45.852166 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9sj4x" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="registry-server" containerID="cri-o://b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a" gracePeriod=2 Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.327666 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9sj4x" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.440195 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-catalog-content\") pod \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.440361 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxgdq\" (UniqueName: \"kubernetes.io/projected/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-kube-api-access-wxgdq\") pod \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.440407 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-utilities\") pod \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\" (UID: \"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b\") " Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.441982 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-utilities" (OuterVolumeSpecName: "utilities") pod "f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" (UID: 
"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.449861 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-kube-api-access-wxgdq" (OuterVolumeSpecName: "kube-api-access-wxgdq") pod "f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" (UID: "f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b"). InnerVolumeSpecName "kube-api-access-wxgdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.498722 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" (UID: "f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.549821 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxgdq\" (UniqueName: \"kubernetes.io/projected/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-kube-api-access-wxgdq\") on node \"crc\" DevicePath \"\"" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.549879 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.549901 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.865024 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerID="b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a" exitCode=0 Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.865068 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sj4x" event={"ID":"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b","Type":"ContainerDied","Data":"b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a"} Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.865148 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sj4x" event={"ID":"f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b","Type":"ContainerDied","Data":"81e718f57bca6ca4740bd9481c1f9567c38756e44a894c45895019246e20a5b4"} Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.865167 4768 scope.go:117] "RemoveContainer" containerID="b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a" Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.865313 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9sj4x"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.901259 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9sj4x"]
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.903779 4768 scope.go:117] "RemoveContainer" containerID="8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.911751 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9sj4x"]
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.928223 4768 scope.go:117] "RemoveContainer" containerID="058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.971050 4768 scope.go:117] "RemoveContainer" containerID="b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a"
Feb 17 14:44:46 crc kubenswrapper[4768]: E0217 14:44:46.971530 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a\": container with ID starting with b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a not found: ID does not exist" containerID="b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.971575 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a"} err="failed to get container status \"b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a\": rpc error: code = NotFound desc = could not find container \"b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a\": container with ID starting with b8101ccfc5248f514d57255cd1298d06e50058a23e9a425106490338a7cb2b4a not found: ID does not exist"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.971594 4768 scope.go:117] "RemoveContainer" containerID="8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc"
Feb 17 14:44:46 crc kubenswrapper[4768]: E0217 14:44:46.971948 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc\": container with ID starting with 8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc not found: ID does not exist" containerID="8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.971998 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc"} err="failed to get container status \"8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc\": rpc error: code = NotFound desc = could not find container \"8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc\": container with ID starting with 8cb69277202cd7c479787baf50bb5ed1d461ba2f4d73fb43b48406dfbe96f7fc not found: ID does not exist"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.972030 4768 scope.go:117] "RemoveContainer" containerID="058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935"
Feb 17 14:44:46 crc kubenswrapper[4768]: E0217 14:44:46.972333 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935\": container with ID starting with 058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935 not found: ID does not exist" containerID="058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935"
Feb 17 14:44:46 crc kubenswrapper[4768]: I0217 14:44:46.972364 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935"} err="failed to get container status \"058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935\": rpc error: code = NotFound desc = could not find container \"058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935\": container with ID starting with 058c1313c7efe3920c47744979f9611362f237d73cdad0df250c53ca1d8c9935 not found: ID does not exist"
Feb 17 14:44:47 crc kubenswrapper[4768]: I0217 14:44:47.551147 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" path="/var/lib/kubelet/pods/f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b/volumes"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.220069 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"]
Feb 17 14:45:00 crc kubenswrapper[4768]: E0217 14:45:00.221127 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="extract-content"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.221142 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="extract-content"
Feb 17 14:45:00 crc kubenswrapper[4768]: E0217 14:45:00.221154 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="registry-server"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.221159 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="registry-server"
Feb 17 14:45:00 crc kubenswrapper[4768]: E0217 14:45:00.221190 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="extract-utilities"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.221197 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="extract-utilities"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.221382 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1df0a66-dc8c-4f23-9bd9-b9cb78cff86b" containerName="registry-server"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.222152 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.225126 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.226096 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.232352 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"]
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.341088 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-config-volume\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.341198 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-secret-volume\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.341459 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cmxf\" (UniqueName: \"kubernetes.io/projected/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-kube-api-access-6cmxf\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.443261 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-config-volume\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.443346 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-secret-volume\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.443487 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cmxf\" (UniqueName: \"kubernetes.io/projected/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-kube-api-access-6cmxf\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.444521 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-config-volume\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.457716 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-secret-volume\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.475952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cmxf\" (UniqueName: \"kubernetes.io/projected/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-kube-api-access-6cmxf\") pod \"collect-profiles-29522325-tcpzz\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:00 crc kubenswrapper[4768]: I0217 14:45:00.564977 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:01 crc kubenswrapper[4768]: I0217 14:45:01.175388 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"]
Feb 17 14:45:02 crc kubenswrapper[4768]: I0217 14:45:02.022698 4768 generic.go:334] "Generic (PLEG): container finished" podID="6eadffa9-44e6-45cb-b4da-bb521d0c8a52" containerID="956572ee74e733e3b9403c0964fafa0fcd78b9fc706f5e22741cf1517acb11b2" exitCode=0
Feb 17 14:45:02 crc kubenswrapper[4768]: I0217 14:45:02.022737 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz" event={"ID":"6eadffa9-44e6-45cb-b4da-bb521d0c8a52","Type":"ContainerDied","Data":"956572ee74e733e3b9403c0964fafa0fcd78b9fc706f5e22741cf1517acb11b2"}
Feb 17 14:45:02 crc kubenswrapper[4768]: I0217 14:45:02.023004 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz" event={"ID":"6eadffa9-44e6-45cb-b4da-bb521d0c8a52","Type":"ContainerStarted","Data":"defd4222969653a18d00abf57540cfdeced7603707c1dc8a2ec9780f21331a72"}
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.390153 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.560887 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-secret-volume\") pod \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") "
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.561057 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cmxf\" (UniqueName: \"kubernetes.io/projected/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-kube-api-access-6cmxf\") pod \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") "
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.561276 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-config-volume\") pod \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\" (UID: \"6eadffa9-44e6-45cb-b4da-bb521d0c8a52\") "
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.562322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-config-volume" (OuterVolumeSpecName: "config-volume") pod "6eadffa9-44e6-45cb-b4da-bb521d0c8a52" (UID: "6eadffa9-44e6-45cb-b4da-bb521d0c8a52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.570726 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6eadffa9-44e6-45cb-b4da-bb521d0c8a52" (UID: "6eadffa9-44e6-45cb-b4da-bb521d0c8a52"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.570891 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-kube-api-access-6cmxf" (OuterVolumeSpecName: "kube-api-access-6cmxf") pod "6eadffa9-44e6-45cb-b4da-bb521d0c8a52" (UID: "6eadffa9-44e6-45cb-b4da-bb521d0c8a52"). InnerVolumeSpecName "kube-api-access-6cmxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.665139 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.665172 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 14:45:03 crc kubenswrapper[4768]: I0217 14:45:03.665186 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cmxf\" (UniqueName: \"kubernetes.io/projected/6eadffa9-44e6-45cb-b4da-bb521d0c8a52-kube-api-access-6cmxf\") on node \"crc\" DevicePath \"\""
Feb 17 14:45:04 crc kubenswrapper[4768]: I0217 14:45:04.051294 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz" event={"ID":"6eadffa9-44e6-45cb-b4da-bb521d0c8a52","Type":"ContainerDied","Data":"defd4222969653a18d00abf57540cfdeced7603707c1dc8a2ec9780f21331a72"}
Feb 17 14:45:04 crc kubenswrapper[4768]: I0217 14:45:04.051356 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="defd4222969653a18d00abf57540cfdeced7603707c1dc8a2ec9780f21331a72"
Feb 17 14:45:04 crc kubenswrapper[4768]: I0217 14:45:04.051369 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522325-tcpzz"
Feb 17 14:45:04 crc kubenswrapper[4768]: I0217 14:45:04.490142 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv"]
Feb 17 14:45:04 crc kubenswrapper[4768]: I0217 14:45:04.499364 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522280-ghhtv"]
Feb 17 14:45:05 crc kubenswrapper[4768]: I0217 14:45:05.565820 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5193ed8a-5a4b-4ae8-abf3-161f56ded5d0" path="/var/lib/kubelet/pods/5193ed8a-5a4b-4ae8-abf3-161f56ded5d0/volumes"
Feb 17 14:45:30 crc kubenswrapper[4768]: I0217 14:45:30.445087 4768 scope.go:117] "RemoveContainer" containerID="cb3065dfe14be7ae452fb785f282f60d238c8cf2b7ae8eec5992bc26406f21d3"
Feb 17 14:45:31 crc kubenswrapper[4768]: I0217 14:45:31.298540 4768 generic.go:334] "Generic (PLEG): container finished" podID="f52c76e0-cf87-47a2-a917-fb08c2924e10" containerID="3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e" exitCode=0
Feb 17 14:45:31 crc kubenswrapper[4768]: I0217 14:45:31.298738 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" event={"ID":"f52c76e0-cf87-47a2-a917-fb08c2924e10","Type":"ContainerDied","Data":"3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e"}
Feb 17 14:45:31 crc kubenswrapper[4768]: I0217 14:45:31.299931 4768 scope.go:117] "RemoveContainer" containerID="3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e"
Feb 17 14:45:31 crc kubenswrapper[4768]: I0217 14:45:31.534614 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dxhrg_must-gather-7z8dt_f52c76e0-cf87-47a2-a917-fb08c2924e10/gather/0.log"
Feb 17 14:45:36 crc kubenswrapper[4768]: E0217 14:45:36.416679 4768 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:47694->38.102.83.36:45817: write tcp 38.102.83.36:47694->38.102.83.36:45817: write: broken pipe
Feb 17 14:45:41 crc kubenswrapper[4768]: I0217 14:45:41.754458 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dxhrg/must-gather-7z8dt"]
Feb 17 14:45:41 crc kubenswrapper[4768]: I0217 14:45:41.755605 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dxhrg/must-gather-7z8dt" podUID="f52c76e0-cf87-47a2-a917-fb08c2924e10" containerName="copy" containerID="cri-o://243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045" gracePeriod=2
Feb 17 14:45:41 crc kubenswrapper[4768]: I0217 14:45:41.773419 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dxhrg/must-gather-7z8dt"]
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.233842 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dxhrg_must-gather-7z8dt_f52c76e0-cf87-47a2-a917-fb08c2924e10/copy/0.log"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.236434 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dxhrg/must-gather-7z8dt"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.313957 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f52c76e0-cf87-47a2-a917-fb08c2924e10-must-gather-output\") pod \"f52c76e0-cf87-47a2-a917-fb08c2924e10\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") "
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.314278 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nps8\" (UniqueName: \"kubernetes.io/projected/f52c76e0-cf87-47a2-a917-fb08c2924e10-kube-api-access-6nps8\") pod \"f52c76e0-cf87-47a2-a917-fb08c2924e10\" (UID: \"f52c76e0-cf87-47a2-a917-fb08c2924e10\") "
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.324680 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f52c76e0-cf87-47a2-a917-fb08c2924e10-kube-api-access-6nps8" (OuterVolumeSpecName: "kube-api-access-6nps8") pod "f52c76e0-cf87-47a2-a917-fb08c2924e10" (UID: "f52c76e0-cf87-47a2-a917-fb08c2924e10"). InnerVolumeSpecName "kube-api-access-6nps8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.416159 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nps8\" (UniqueName: \"kubernetes.io/projected/f52c76e0-cf87-47a2-a917-fb08c2924e10-kube-api-access-6nps8\") on node \"crc\" DevicePath \"\""
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.421610 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dxhrg_must-gather-7z8dt_f52c76e0-cf87-47a2-a917-fb08c2924e10/copy/0.log"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.422113 4768 generic.go:334] "Generic (PLEG): container finished" podID="f52c76e0-cf87-47a2-a917-fb08c2924e10" containerID="243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045" exitCode=143
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.422232 4768 scope.go:117] "RemoveContainer" containerID="243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.422412 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dxhrg/must-gather-7z8dt"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.439433 4768 scope.go:117] "RemoveContainer" containerID="3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.472563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f52c76e0-cf87-47a2-a917-fb08c2924e10-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f52c76e0-cf87-47a2-a917-fb08c2924e10" (UID: "f52c76e0-cf87-47a2-a917-fb08c2924e10"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.501256 4768 scope.go:117] "RemoveContainer" containerID="243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045"
Feb 17 14:45:42 crc kubenswrapper[4768]: E0217 14:45:42.501772 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045\": container with ID starting with 243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045 not found: ID does not exist" containerID="243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.501812 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045"} err="failed to get container status \"243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045\": rpc error: code = NotFound desc = could not find container \"243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045\": container with ID starting with 243869db233e1b96abda2353cba2410ab458cae2152203c6a111528af54b5045 not found: ID does not exist"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.501843 4768 scope.go:117] "RemoveContainer" containerID="3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e"
Feb 17 14:45:42 crc kubenswrapper[4768]: E0217 14:45:42.502139 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e\": container with ID starting with 3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e not found: ID does not exist" containerID="3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.502169 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e"} err="failed to get container status \"3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e\": rpc error: code = NotFound desc = could not find container \"3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e\": container with ID starting with 3ea3261530326f7ab96b33a47ae07820ce803bf0c594bbc9b2530b30e4340c2e not found: ID does not exist"
Feb 17 14:45:42 crc kubenswrapper[4768]: I0217 14:45:42.517472 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f52c76e0-cf87-47a2-a917-fb08c2924e10-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 17 14:45:43 crc kubenswrapper[4768]: I0217 14:45:43.552131 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f52c76e0-cf87-47a2-a917-fb08c2924e10" path="/var/lib/kubelet/pods/f52c76e0-cf87-47a2-a917-fb08c2924e10/volumes"
Feb 17 14:45:58 crc kubenswrapper[4768]: I0217 14:45:58.059790 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:45:58 crc kubenswrapper[4768]: I0217 14:45:58.060417 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:46:28 crc kubenswrapper[4768]: I0217 14:46:28.060687 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:46:28 crc kubenswrapper[4768]: I0217 14:46:28.061461 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:46:30 crc kubenswrapper[4768]: I0217 14:46:30.584240 4768 scope.go:117] "RemoveContainer" containerID="f135f0b99d75614b026845c58ed7f13b53a517a3386838b7e7a663e76dfedd87"
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.059757 4768 patch_prober.go:28] interesting pod/machine-config-daemon-p97z4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.060537 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.060611 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p97z4"
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.064020 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"} pod="openshift-machine-config-operator/machine-config-daemon-p97z4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.064232 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerName="machine-config-daemon" containerID="cri-o://7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18" gracePeriod=600
Feb 17 14:46:58 crc kubenswrapper[4768]: E0217 14:46:58.189686 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.879803 4768 generic.go:334] "Generic (PLEG): container finished" podID="10c685ba-8fe0-425c-958c-3fb6754d3d84" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18" exitCode=0
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.879876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" event={"ID":"10c685ba-8fe0-425c-958c-3fb6754d3d84","Type":"ContainerDied","Data":"7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"}
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.879959 4768 scope.go:117] "RemoveContainer" containerID="19f98ba1d898b39bf0a21ff629e9d39b7fa1b2d21cfcd6ddb2bf6f26918bf017"
Feb 17 14:46:58 crc kubenswrapper[4768]: I0217 14:46:58.881013 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:46:58 crc kubenswrapper[4768]: E0217 14:46:58.881697 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:47:12 crc kubenswrapper[4768]: I0217 14:47:12.534296 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:47:12 crc kubenswrapper[4768]: E0217 14:47:12.535374 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:47:23 crc kubenswrapper[4768]: I0217 14:47:23.536257 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:47:23 crc kubenswrapper[4768]: E0217 14:47:23.538400 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:47:35 crc kubenswrapper[4768]: I0217 14:47:35.534423 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:47:35 crc kubenswrapper[4768]: E0217 14:47:35.536995 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:47:48 crc kubenswrapper[4768]: I0217 14:47:48.534996 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:47:48 crc kubenswrapper[4768]: E0217 14:47:48.536348 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:48:01 crc kubenswrapper[4768]: I0217 14:48:01.545306 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:48:01 crc kubenswrapper[4768]: E0217 14:48:01.546267 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:48:14 crc kubenswrapper[4768]: I0217 14:48:14.534734 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:48:14 crc kubenswrapper[4768]: E0217 14:48:14.535547 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:48:26 crc kubenswrapper[4768]: I0217 14:48:26.533873 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:48:26 crc kubenswrapper[4768]: E0217 14:48:26.534744 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"
Feb 17 14:48:37 crc kubenswrapper[4768]: I0217 14:48:37.533850 4768 scope.go:117] "RemoveContainer" containerID="7d54af05ec5a74113e4d5793b54f49bc5519842e435df1206e448365a991ac18"
Feb 17 14:48:37 crc kubenswrapper[4768]: E0217 14:48:37.534476 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p97z4_openshift-machine-config-operator(10c685ba-8fe0-425c-958c-3fb6754d3d84)\"" pod="openshift-machine-config-operator/machine-config-daemon-p97z4" podUID="10c685ba-8fe0-425c-958c-3fb6754d3d84"